Alexander Merry
Magdalen College
University of Oxford
A thesis submitted for the degree of
Doctor of Philosophy
Trinity 2013
Acknowledgements
Firstly, I would like to thank my supervisors, Professor Samson Abramsky and Professor Bob Coecke, for giving me the opportunity to do this research and supporting me in my efforts, as well as for starting the field that was the motivation for this work. I also owe an immense debt of gratitude to Dr. Aleks Kissinger, who both paved the way for my work and gave me invaluable advice, encouragement and feedback on the way.
I wish to thank Dr. Lucas Dixon and Dr. Ross Duncan for the work they put into Quantomatic, both the theory and the implementation, without which this work could have taken much longer. Additionally, I owe thanks to Matvey Soloviev for his bright ideas in the DCM paper, which really opened up the possibility of reasoning about !-graphs using graph rewriting.
Thanks also to the administrative staff of the Department of Computer Science for helping me navigate university bureaucracy, and especially to Julie Sheppard and Janet Sadler.
Finally, a huge thank you to my family and friends, who have supported and encouraged me over the last four years, through the hard times and the good. I couldn't have done it without you.
Abstract
The aim of this thesis is to present an extension to the string graphs of Dixon, Duncan and Kissinger that allows the finite representation of certain infinite families of graphs and graph rewrite rules, and to demonstrate that a logic can be built on this to allow the formalisation of inductive proofs in the string diagrams of compact closed and traced symmetric monoidal categories.
String diagrams provide an intuitive method for reasoning about monoidal categories. However, this does not negate the ability for those using them to make mistakes in proofs. To this end, there is a project (Quantomatic) to build a proof assistant for string diagrams, at least for those based on categories with a notion of trace. The development of string graphs has provided a combinatorial formalisation of string diagrams, laying the foundations for this project.
The prevalence of commutative Frobenius algebras (CFAs) in quantum information theory, a major application area of these diagrams, has led to the use of variable-arity nodes as a shorthand for normalised networks of Frobenius algebra morphisms, so-called "spider notation". This notation greatly eases reasoning with CFAs, but string graphs are inadequate to properly encode this reasoning.
This dissertation firstly extends string graphs to allow for variable-arity nodes to be represented at all, and then introduces !-box notation - and structures to encode it - to represent string graph equations containing repeated subgraphs, where the number of repetitions is arbitrary. This can be used to represent, for example, the "spider law" of CFAs, allowing two spiders to be merged, as well as the much more complex generalised bialgebra law that can arise from two interacting CFAs.
This work then demonstrates how we can reason directly about !-graphs, viewed as (typically infinite) families of string graphs. Of particular note is the presentation of a form of graph-based induction, allowing the formal encoding of proofs that previously could only be represented as a mix of string diagrams and explanatory text.
5.2 Rewriting String Graphs with !-Graph Equations
5.3 Rewriting !-Graphs with !-Graph Equations
5.4 Soundness of !-Graph Rewriting
6 A Logic of !-Graphs
6.1 !-Graph Equational Logic
6.2 Regular Forms of Instantiations
6.2.1 Depth-Ordered Form
6.2.2 Expansion-Normal Form
6.3 !-box Introduction
6.4 !-box Induction
6.4.1 The Generalised Bialgebra Law
6.5 Merging !-boxes
6.6 A More Traditional Logic
7 Computability of !-Graph Matching
7.1 Enumerating Instances
7.2 Matching in Quantomatic
7.2.1 Matching Onto !-Graphs
8 Conclusions and Further Work
8.1 Further Work
A Correctness of Quantomatic's Matching
B The Spider Law
Bibliography
Chapter 1
Introduction
In the early 1950s, as a research student attending lectures on differential geometry, Roger Penrose developed a pictorial notation for dealing with tensors. He started using this in publications some twenty years later, most notably in [32]. His motivation for using this notation was as an exposition and working aid when dealing with the tensor index notation common in various areas of physics. [32], in which Penrose generalised and formalised tensor index notation as abstract tensors, is liberally illustrated with these diagrams. In a tensor expression like
where we can consider the large letters to be morphisms with subscripts as inputs and superscripts as outputs, we have to keep careful track of which indices are repeated (and therefore should be contracted - the notion of composition for abstract tensors). This is made clear in the following diagrammatic representation
These string diagrams (or wire diagrams) are particularly good at capturing information with a planar (or mostly planar), rather than linear, structure. Thus it was quite natural that they came to be used for monoidal categories, which have an associative, unital bifunctor that acts as a form of composition orthogonal to the categorical composition.
Note that, unlike in the traditional diagrams of category theory, string diagrams represent morphisms as nodes and objects as wires. This allows categorical composition to be represented by joining wires, as in
while the monoidal composition can be represented via juxtaposition
Twenty years after Penrose's publication, Joyal and Street formalised the link between string diagrams and monoidal categories, at least in part [20]. In particular, they showed that progressive plane diagrams, a generalised topological graph with directed wires that do not cross or have loops, exactly capture the axioms of monoidal categories. Additionally, they showed that allowing crossings captures the axioms of symmetric monoidal categories, which have a natural isomorphism $\sigma_{A,B} : A \otimes B \to B \otimes A$ witnessing that $A \otimes B \cong B \otimes A$ for all objects $A$ and $B$, and that other variations on these diagrams capture braided and balanced monoidal categories.
Joyal and Street planned a follow-up paper to treat non-progressive diagrams, such as those with loops, but this was never published. The link between these diagrams and either traced symmetric monoidal categories or compact closed categories (depending on the restrictions placed on the diagrams) was shown in [23].
We concentrate on traced symmetric monoidal categories and compact closed categories in this thesis. Selinger's survey paper [37] gives a more comprehensive overview of graphical languages for various types of monoidal categories.
These diagrammatic languages found a home in the study of quantum computer science, when in 2004 Samson Abramsky and Bob Coecke recast von Neumann's axiomatisation of quantum mechanics into category theory, allowing the use of string diagrams. The diagrams proved to be particularly helpful in illuminating the role of entanglement in various quantum mechanical and quantum informational protocols, and provided a grounding for intuition in something familiar (the physical manipulation of a diagram) for something that, at many times, appears to behave counter-intuitively (quantum mechanics).
In this setting, the wires of a string diagram represent quantum mechanical systems, and the nodes are operations on those systems. Of particular relevance to this thesis is the importance of commutative Frobenius algebras in these languages.
A Frobenius algebra in a monoidal category $\mathcal{C}$ consists of a monoid and a comonoid on an object $A$ of $\mathcal{C}$ - the monoid being an associative multiplication $\mu : A \otimes A \to A$ with unit $\eta : I \to A$, and the comonoid being its dual, a comultiplication $\delta : A \to A \otimes A$ with counit $\epsilon : A \to I$ - that interact "nicely". Specifically, they obey the Frobenius law:

$$(1_A \otimes \mu) \circ (\delta \otimes 1_A) = \delta \circ \mu = (\mu \otimes 1_A) \circ (1_A \otimes \delta)$$
If we represent the monoid and comonoid graphically, then this law can be drawn
Commutative Frobenius algebras (CFAs) are those whose monoid is commutative and comonoid is cocommutative. They have the property that the value of any connected (in the graphical language) network of components of a given CFA is determined purely by the number of inputs, outputs and loops it has. This gives rise to a concise and useful representation of a CFA, the "spider":
Loops are encoded in this representation as self-loops (by connecting outputs to inputs).
This leads to the spider law, which allows us to merge two connected spiders of the same type
and this law significantly simplifies working with these graphical languages.
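The bookkeeping behind the spider law can be illustrated in code. The following Python sketch is purely illustrative (the `Spider` class and `merge` function are our own names, not part of Quantomatic): it represents a spider by its numbers of inputs, outputs and loops, and computes the result of merging two spiders of the same type joined by $k$ wires.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Spider:
    """A CFA spider, determined (as a morphism) purely by its
    numbers of inputs, outputs and loops."""
    inputs: int
    outputs: int
    loops: int = 0

def merge(s1: Spider, s2: Spider, k: int) -> Spider:
    """Merge two spiders of the same colour joined by k >= 1 wires
    (outputs of s1 plugged into inputs of s2).  Joining a connected
    pair of nodes by k wires creates k - 1 independent loops."""
    assert 1 <= k <= min(s1.outputs, s2.inputs)
    return Spider(
        inputs=s1.inputs + (s2.inputs - k),
        outputs=s2.outputs + (s1.outputs - k),
        loops=s1.loops + s2.loops + (k - 1),
    )
```

For example, merging a 2-input, 3-output spider into a 2-input, 1-output spider along two wires yields a 2-input, 2-output spider with one loop.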
While the use of string diagrams can help greatly with understanding and intuition, constructing large proofs by hand can still be tedious and error-prone. This is where projects like Quantomatic [25], a collection of automated tools for graphical languages, come in.
The goal of Quantomatic is to produce a useful proof assistant for graphical languages; as its name suggests, it was originally aimed at the languages being developed for quantum computation. For example, Dixon and Duncan used it to verify the correctness of the Steane code in [12]. It is still primarily used for quantum languages, but it aims to be useful in other fields as well.
Dixon, Duncan and Kissinger provided a theoretical underpinning for Quantomatic's automated reasoning in the form of string graphs, introduced as open graphs in [10] and further refined by Dixon and Kissinger in [11] and then by Kissinger in [23]. These represent string diagrams as typed graphs, where the vertex types are sorted into node-vertices and wire-vertices, the former being the "real" vertices from the string diagram and the latter a kind of "dummy" vertex holding the wires in place (as well as carrying the wire type). For example, the string diagram
could be represented by the graph
This allows us to do equational reasoning for diagrams using graph rewriting, in a similar manner to how traditional proof assistants use term rewriting.
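To make the wire-vertex/node-vertex distinction concrete, here is a minimal Python sketch of a string graph as a typed directed graph. This is our own toy encoding, not Quantomatic's actual data structures; it checks only the basic well-formedness conditions described above.

```python
from dataclasses import dataclass

@dataclass
class StringGraph:
    """A toy string graph: vtype maps each vertex name to
    ("node", t) or ("wire", t), where t is its type; edges is a
    set of (source, target) pairs."""
    vtype: dict
    edges: set

    def is_valid(self) -> bool:
        """Wire-vertices must have in-degree and out-degree at most 1,
        and node-vertices are never directly adjacent (every edge has
        a wire-vertex endpoint)."""
        for v, (kind, _) in self.vtype.items():
            if kind == "wire":
                ins = sum(1 for (s, t) in self.edges if t == v)
                outs = sum(1 for (s, t) in self.edges if s == v)
                if ins > 1 or outs > 1:
                    return False
        return all(self.vtype[s][0] == "wire" or self.vtype[t][0] == "wire"
                   for (s, t) in self.edges)
```

For instance, a morphism `f` wired to a morphism `g` via one wire-vertex of type `A` would be `StringGraph({"f": ("node", "f"), "w": ("wire", "A"), "g": ("node", "g")}, {("f", "w"), ("w", "g")})`.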
Note that string graphs are not the only combinatorial representation for the categorical structures that underlie string diagrams. In particular, Hasegawa, Hofmann and Plotkin use a bijective map to describe the connections formed by wires in their paper showing that finite dimensional vector spaces are complete for traced symmetric monoidal categories [18]. Using graphs, however, allows us to build on a substantial body of established work on graph rewriting.
The first aim of this dissertation is to extend this formalism to allow tools like Quantomatic to work with the spider law and other, more complex, rules, using the !-graphs posited in [9] as graph patterns. In particular, we aim to give a mathematical underpinning to these, which were previously only treated informally.
!-graphs are string graphs with demarcated subgraphs called !-boxes that can be repeated any number of times, allowing an infinite family of graphs to be represented as a single graph:
This can be extended to string graph equations, allowing the spider law to be represented as a
!-graph equation:
We call a string graph or string graph equation represented by a !-graph or !-graph equation an instance of that graph or rule.
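The idea of instances can be illustrated with a toy Python sketch of !-box expansion. Here we assume (purely for illustration; the formal development comes in chapter 4) that a !-graph is a string graph fragment with one set of vertices marked as lying in a single !-box; the $n$-th instance repeats the boxed vertices and their incident edges $n$ times.

```python
def instances(fixed, boxed, boxed_edges, n):
    """Return the vertex and edge sets of the instance with n copies
    of the !-box contents.  Copied vertices are renamed v#i; n = 0
    corresponds to deleting the box contents entirely."""
    verts = set(fixed)
    edges = set()
    for i in range(n):
        ren = {v: f"{v}#{i}" for v in boxed}
        verts |= set(ren.values())
        edges |= {(ren.get(s, s), ren.get(t, t)) for (s, t) in boxed_edges}
    return verts, edges
```

For a spider `s` whose input wire-vertex `in` sits inside a !-box, the instance with two copies has vertices `{"s", "in#0", "in#1"}` and an edge from each copied input into `s`, modelling a variable-arity node.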
This notation may seem overpowered for simply representing the spider law, but it also allows many equations describing interactions between certain kinds of CFA, such as
which can be represented
and even more complex examples that we will describe in later chapters.
The second aim of this dissertation is to provide the beginnings of a formal system for reasoning about (and not just with) these !-graphs. We show that rewrite rules composed of !-graphs can be used to rewrite !-graphs and, in doing so, perform equational reasoning on these infinite families of string graphs. What is more, we introduce some additional rules to this logic of !-graphs, including a form of graphical induction that encodes inductive proofs that could previously only be done with a mix of diagrams and explanatory text.
Chapter 2 describes various types of monoidal category and the graphical languages they induce. It also provides some motivating examples in the form of a subset of the calculus for quantum information processing.
Chapter 3 provides a brief introduction to rewriting as a way of doing equational reasoning, using the well-established field of term rewriting. It then presents an equational logic for traced symmetric monoidal categories and compact closed categories using string graphs, and shows how this can be implemented using string graph rewriting. We present a slightly different construction of string graphs to that of [23], as Kissinger's construction does not allow for the variable-arity nodes required by the spiders of the quantum graphical languages.
Chapter 4 describes the !-box notation and its encoding in !-graphs. It provides a set of operations on !-boxes (COPY, DROP and KILL) that are used to define the concrete instances of a !-graph (i.e. those string graphs it represents).
Chapter 5 introduces !-graph equations to represent families of string graph equations, and demonstrates how their directed form, !-graph rewrite rules, can be used to rewrite not only string graphs but also !-graphs. The latter produces further !-graph equations that are sound with respect to their interpretation as families of string graph equations. This effectively allows us to rewrite a potentially infinite family of string graphs simultaneously.
Chapter 6 builds on this to construct a nascent logic for reasoning about !-graphs. In particular, we construct an analogue of induction for string graphs which functions as a !-box introduction rule, and demonstrate how this can be used to derive
in the calculus. We also demonstrate a derivation of the generalised bialgebra law, which we introduce at the end of chapter 2.
Chapter 7 demonstrates an algorithm for finding instances of !-graphs that match string graphs or other !-graphs, allowing for practical implementations of rewriting with !-graph rewrite rules. The complete matching process implemented in Quantomatic for rewriting string graphs with !-graph rewrite rules is described, as well as the planned extension to it that will allow !-graphs to be rewritten.
Finally, chapter 8 presents our conclusions and areas of further work.
Chapter 2
Diagrammatic Reasoning
This chapter provides a brief summary from the existing literature of various types of monoidal categories that are relevant to this thesis, and the diagrammatic representations of their morphisms. It also demonstrates their utility in reasoning about such categories using examples drawn from existing work in the area of graphical languages for quantum computation.
2.1 Monoidal Categories
Monoidal categories provide a useful setting for reasoning about processes of all kinds. In addition to the usual functional composition, they allow a form of parallel composition via the tensor product.
Definition 2.1.1 (Monoidal Category). A monoidal category is a category $\mathcal{C}$ equipped with

a bifunctor $\otimes : \mathcal{C} \times \mathcal{C} \to \mathcal{C}$, called the tensor product;

a distinguished object $I$ of $\mathcal{C}$, called the tensor unit; and

three natural isomorphisms

$$\alpha_{A,B,C} : (A \otimes B) \otimes C \to A \otimes (B \otimes C) \qquad \lambda_A : I \otimes A \to A \qquad \rho_A : A \otimes I \to A$$

such that the following diagrams commute:
Example 2.1.2. The category Set of sets is monoidal, with the cartesian product as $\otimes$ and a single-element set as $I$. Alternatively, the disjoint union of sets and the empty set can be used as the monoidal structure.
The category $\mathbf{Vect}_{\mathbb{F}}$ of vector spaces over a field $\mathbb{F}$ and linear functions between them is a monoidal category, where $\otimes$ is the tensor product of vector spaces, and $I$ is $\mathbb{F}$ viewed as a one-dimensional vector space.
In later examples, we will use the fact that if $U$, $V$ and $W$ are vector spaces over $\mathbb{F}$, then each bilinear map $U \times V \to W$ induces a unique linear map $U \otimes V \to W$. We can use this to construct the natural isomorphisms $\alpha$, $\lambda$ and $\rho$ from the corresponding maps on elements.
When considering (categorical) diagrams of a monoidal category $\mathcal{C}$ involving the above isomorphisms, Mac Lane's coherence theorem [29] allows us to work instead in a category where those isomorphisms are identities; we call this category the strictification of $\mathcal{C}$. Formal categorical diagrams (of the sort seen in definition 2.1.1) commute in $\mathcal{C}$ if and only if they commute in its strictification. Therefore, we abuse notation slightly by omitting parentheses and, insofar as possible, the coherence isomorphisms, writing
instead of
for example.
In [32], Penrose presented a graphical language he had developed for representing tensor networks. Restricted variants of this language can be used for talking about more general monoidal categories.
In this notation, we represent objects of the category (and identity morphisms on those objects) as lines, and morphisms as boxes, also called nodes. For example, a morphism $f : A \to B$ could be depicted
where we read the diagram in a downwards direction. The tensor product is depicted by placing things side-by-side, so if $g : C \to D$ then $A \otimes C$ is drawn as two parallel wires and $f \otimes g$ as the boxes for $f$ and $g$ placed next to each other
One convenience that this immediately provides, other than requiring less mental effort to parse than conventional symbolic notation, is that the equation

$$(g \circ f) \otimes (g' \circ f') = (g \otimes g') \circ (f \otimes f')$$

- a consequence of the fact that $\otimes$ is a bifunctor - is implicit in the graphical notation; both sides of the equation are, in fact, represented by the same diagram
The tensor product has a rigid order; it is common to allow more flexibility, in a manner akin to rerouting information. This is provided by a symmetric monoidal category.
Definition 2.1.3 (Symmetric Monoidal Category). A symmetric monoidal category, or SMC, is a monoidal category equipped with a natural isomorphism

$$\sigma_{A,B} : A \otimes B \to B \otimes A$$

such that $\sigma_{B,A} \circ \sigma_{A,B} = 1_{A \otimes B}$ and the following diagram commutes:
Example 2.1.4. Set with either cartesian products or disjoint unions is symmetric, as is $\mathbf{Vect}_{\mathbb{F}}$. In the latter case, $\sigma_{U,V}$ can be defined (see example 2.1.2) using the map $(u, v) \mapsto v \otimes u$.
Diagrammatically, we represent $\sigma_{A,B}$ as
The fact that $\sigma$ is a natural isomorphism is represented in this notation as an ability to "slide" morphisms up and down wires:
Joyal and Street presented [20] a formalisation of Penrose's diagrams based on topological graphs. They showed that progressive polarised diagrams, essentially those without loops, can be used to form free symmetric monoidal categories; effectively, they exactly capture the axioms of a symmetric monoidal category.
There are plenty of scenarios where we do want to introduce loops, whether to represent the trace (or partial trace) operation on a matrix, looping in a computation or even the effects of entanglement in a quantum mechanical system, where information can sometimes appear to flow backwards in time.
A trace operation, if one exists, can be used to capture a looping construction.
Definition 2.1.5 (Traced Symmetric Monoidal Category). A traced symmetric monoidal category is a symmetric monoidal category together with a trace operation: a family of functions

$$\mathrm{Tr}^X_{A,B} : \mathcal{C}(A \otimes X, B \otimes X) \to \mathcal{C}(A, B)$$

for all objects $A$, $B$ and $X$ that satisfies the following conditions:
Example 2.1.6. FDVect is a traced symmetric monoidal category, with the partial trace of linear maps as the trace operation.
$\mathrm{Tr}^X_{A,B}(f)$ is represented as
and, as before, the properties of the trace operation are subsumed into its representation in the graphical language, either directly (as with conditions 4 or 5, for example, where both sides of the equation have the same representation) or via "natural" graphical manipulations. For example, condition 2 corresponds to sliding a morphism round a trace loop from one end to the other, although care must be taken, as the following diagram has no meaning in a traced SMC:
(but see remark 2.1.10 below). Condition 6 is visually represented as an ability to "yank" loops with no (non-identity) morphisms straight:
Finally, we come to compact closed categories. These are based around dual objects, which are the categorical analogue of dual spaces from linear algebra.
Definition 2.1.7 (Dual Objects). Let $A$ and $B$ be objects in a monoidal category. $B$ is called a left dual of $A$ (and $A$ a right dual of $B$) if there exist morphisms $\eta : I \to A \otimes B$ and $\epsilon : B \otimes A \to I$ such that

$$(1_A \otimes \epsilon) \circ (\eta \otimes 1_A) = 1_A$$

and

$$(\epsilon \otimes 1_B) \circ (1_B \otimes \eta) = 1_B$$
Note that in a symmetric monoidal category, if $B$ is a left dual of $A$, it is also a right dual of $A$ and vice versa, so we simply say that $B$ is the dual of $A$.
Definition 2.1.8 (Compact Closed Category). A compact closed category is a symmetric monoidal category where every object has a dual.
We add the notion of dual objects to the graphical language using arrows. If an object is represented by a downward-directed wire, its dual is represented by an upward-directed one. The maps $\eta$ and $\epsilon$ can then be drawn
respectively. The equations in the above definition can then be drawn
Example 2.1.9. Consider a finite-dimensional vector space $V$ and its (algebraic) dual space $V^*$. Given a basis $\{e_i\}$ of $V$, there is a corresponding basis $\{e^i\}$ of $V^*$ such that

$$e^i(e_j) = \delta_{ij}$$
We can then define the map $\epsilon : V^* \otimes V \to \mathbb{F}$ to be induced (see example 2.1.2) by $(f, v) \mapsto f(v)$, and $\eta : \mathbb{F} \to V \otimes V^*$ to be

$$1 \mapsto \sum_i e_i \otimes e^i$$
So if we consider $V$ as an object of FDVect, the category of finite-dimensional vector spaces over $\mathbb{F}$, $V^*$ is a categorical dual of $V$. Thus FDVect is compact closed (although $\mathbf{Vect}_{\mathbb{F}}$ is not). Note, however, that our definitions of $\eta$ and $\epsilon$ are not the only possibilities.
Given the graphical notation, we would expect to be able to construct a trace operation using these maps; this is indeed the case [21]. $\mathrm{Tr}^X_{A,B}(f)$ is

$$(1_B \otimes (\epsilon \circ \sigma_{X,X^*})) \circ (f \otimes 1_{X^*}) \circ (1_A \otimes \eta)$$
or, graphically,
Remark 2.1.10. Joyal, Street and Verity's Int construction [21] allows a traced SMC to be embedded in a compact closed category with formal dual objects. This allows the diagrams for compact closed categories to be given meaning when working with traced SMCs.
2.2 Quantum Computation with Diagrams
An area which is making active use of these diagrammatic languages is quantum information processing. Starting with a paper by Abramsky and Coecke in 2004 [1], there has been considerable work in axiomatising quantum mechanics in a categorical setting that admits diagrammatic reasoning. Indeed, a major application of the ideas set out in this thesis is to make this work easier. We will therefore present a brief introduction to one of these languages, both as motivation and as a source of examples.
Most of the languages that have been developed in this area are based on compact closed categories; in particular, the usual model for them is the category FDHilb of finite dimensional Hilbert spaces.
Definition 2.2.1. A Hilbert space is a real or complex inner product space that is a complete metric space with respect to the distance metric

$$d(u, v) = \|u - v\| = \sqrt{\langle u - v, u - v \rangle}$$
Note that we can ignore the completeness requirement, as it is satisfied by all finite-dimensional inner product spaces.
FDHilb is a compact closed category, with the categorical dual of a Hilbert space $H$ being the algebraic dual $H^*$, as in FDVect (example 2.1.9). As such, we can make use of the diagrammatic language already introduced for such categories.
A structure that has particular importance in this formulation of quantum mechanics is the Frobenius algebra, a monoid and comonoid pair that interact particularly well.
A monoid in a monoidal category $\mathcal{C}$ is a triple $(A, \mu, \eta)$ where $A$ is an object of $\mathcal{C}$, and $\mu : A \otimes A \to A$ and $\eta : I \to A$ are morphisms satisfying unit and associativity laws:

$$\mu \circ (\eta \otimes 1_A) = 1_A = \mu \circ (1_A \otimes \eta) \qquad \mu \circ (\mu \otimes 1_A) = \mu \circ (1_A \otimes \mu)$$
Graphically, we can represent the monoid components as nodes; these laws can then be drawn
and
A comonoid is the dual of this: a triple $(A, \delta, \epsilon)$ with $\delta : A \to A \otimes A$ and $\epsilon : A \to I$ satisfying counit and coassociativity laws:

$$(\epsilon \otimes 1_A) \circ \delta = 1_A = (1_A \otimes \epsilon) \circ \delta \qquad (\delta \otimes 1_A) \circ \delta = (1_A \otimes \delta) \circ \delta$$
Graphically, the comonoid components are the mirror images of the monoid ones, satisfying
and
A monoid and comonoid on the same object form a Frobenius algebra if they satisfy the Frobenius law:
or, symbolically,

$$(1_A \otimes \mu) \circ (\delta \otimes 1_A) = \delta \circ \mu = (\mu \otimes 1_A) \circ (1_A \otimes \delta)$$
A Frobenius algebra is commutative if the monoid is commutative and the comonoid is cocommutative:
Commutative Frobenius algebras (CFAs) appear repeatedly in quantum graphical languages, whether to describe entanglement [5, 6] or classical data [8]. Thus the practical utility of these languages is greatly improved by a consequence of the Frobenius law, namely that any connected (in the diagrammatic language) network of multiplications, comultiplications, units and counits of a commutative Frobenius algebra has a unique normal form consisting of a series of multiplications followed by a series of loops followed by a series of comultiplications:
This allows a more compact representation, generally referred to as a "spider":
where the loops are encoded by connecting outputs of the spider to inputs. Note that our original multiplication, comultiplication, unit and counit are subsumed into this notation. This leads to the spider law, which allows us to merge two connected spiders of the same type
The spider law and the spider identity law
completely capture the laws of a CFA.
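These laws can be checked numerically in a concrete model. The following Python/NumPy sketch uses the standard basis-copying Frobenius algebra on $\mathbb{C}^2$ (our choice of concrete example: $\delta|i\rangle = |ii\rangle$, $\mu = \delta^T$, $\eta = |0\rangle + |1\rangle$, $\epsilon = \eta^T$) and verifies the unit, counit and Frobenius laws as matrix equations.

```python
import numpy as np

# Basis-copying CFA on C^2:
#   delta |i> = |ii>,  mu = delta^T,  eta = |0> + |1>,  eps = eta^T
delta = np.array([[1, 0],
                  [0, 0],
                  [0, 0],
                  [0, 1]])          # C^2 -> C^2 (x) C^2
mu    = delta.T                     # C^2 (x) C^2 -> C^2
eta   = np.array([[1], [1]])        # C -> C^2
eps   = eta.T                       # C^2 -> C
I2    = np.eye(2)

# Unit and counit laws: mu(eta (x) 1) = 1 = (eps (x) 1)delta
assert np.array_equal(mu @ np.kron(eta, I2), I2)
assert np.array_equal(np.kron(eps, I2) @ delta, I2)

# Frobenius law: (1 (x) mu)(delta (x) 1) = delta mu = (mu (x) 1)(1 (x) delta)
lhs = np.kron(I2, mu) @ np.kron(delta, I2)
mid = delta @ mu
rhs = np.kron(mu, I2) @ np.kron(I2, delta)
assert np.array_equal(lhs, mid) and np.array_equal(mid, rhs)
```

This particular CFA also satisfies $\mu \circ \delta = 1$, anticipating the special CFAs of the next section.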
There are various quantum graphical languages built around CFAs, including the Z/X calculus [5], the trichromatic calculus [28] and the GHZ/W calculus [6]. There are also several languages built on top of the calculus, such as Duncan and Perdrix's calculus for measurement-based quantum computation [13]. The GHZ/W calculus even admits an encoding of rational arithmetic [7].
We will use a restricted version of the calculus (without angles or scalars) to provide motivating examples. This consists of two special CFAs, called the X and Z CFAs, together with certain rules governing their interaction.
Definition 2.2.2. A CFA is special if it satisfies
The "spiderised" version of this is
Traditionally, the X CFA has been represented with red nodes, and the Z CFA with green nodes. However, we will follow [23] in using grey for X and white for Z, if only to spare colourblind readers some frustration. Thus our CFAs are
and
In addition to the laws of a special CFA for each colour of node, we have the following laws:
We name these laws, for ease of reference, the copying laws, the bialgebra law, scalar elimination and the dual law, in order.
For the dual law we might expect certain symmetries:
Indeed, we can use the laws of a CFA together with the dual law to derive these. For example, the last one can be derived as follows:
This fairly simple proof can be written in "traditional" mathematical notation as well, but - because we have to explicitly use various laws and axioms that are inherent in the diagrammatic language - it ends up being long and tedious. We present such a proof below to demonstrate this. For reference, the dual law written symbolically is
Then we have
Even with explanatory notes about what law is being used when, this proof is hard to follow - and was hard to construct - compared to the graphical notation.
Note that we can make the graphical version even cleaner by using spider notation:
The other two symmetries rely on commutativity. In particular, we have
and similarly with the colours swapped.
We can create "spiderised" versions of the copying laws
and we can prove that these are correct by induction on $n$. The proof is fairly simple, and we will just do one version, since the other is symmetric. The base case follows by the scalar elimination laws. So consider the case $n + 1$:
We can construct a similarly spiderised version of the bialgebra law with some trivial applications of the spider law
but there is also a generalised version of this involving the complete bipartite graph of $n$ X spiders and $m$ Z spiders (the normal bialgebra law is the case $n = m = 2$). Graphically, this looks something like
but it is hard to see what this is supposed to represent without an accompanying textual description. For the same reason, proving this graphically is a messy affair, and convincing yourself that such a proof is correct is even harder.
Chapters 4 and 5 will introduce a formal notation capable of expressing this generalised bialgebra law, and chapter 6 will provide the techniques necessary to prove it formally, together with an informal demonstration of the proof.
Chapter 3
Rewriting
Equational reasoning is core to many problems in mathematics and computer science, from proving that a left unit is a right unit in a commutative ring to optimising functional programs.
The traditional approach to mechanising equational reasoning is to use term rewriting to implement equational logic. We will briefly describe this approach before presenting a related mechanisation scheme, adapted from the existing literature (and [23] in particular), that is much more suited to the examples in the preceding chapter.
3.1 Term Rewriting and the Word Problem
In order to reason formally about mathematical equations, we need a symbolic language to express those equations. This section summarises the relevant parts of a large body of existing literature on term rewriting and universal algebra; a more comprehensive introduction can be found in [3].
A signature provides symbols for the "fixed" elements of a theory: for example, a multiplication operator or its unit. It is simply a set $\Sigma$ of symbols, each with an associated non-negative integer known as the arity. In the case of group theory, we might have symbols for the unit, inverse and multiplication, with respective arities 0, 1 and 2.
If we also have a set $V$ of variables disjoint from $\Sigma$, we can start to write down terms such as $m(x, e)$ (where $x$ is a variable and $m$, $e$ are symbols of arity 2 and 0). The set $T(\Sigma, V)$ of possible $\Sigma$-terms over $V$ is defined inductively:

if $x \in V$ then $x \in T(\Sigma, V)$

if $f \in \Sigma$ with arity $n$ and $t_1, \ldots, t_n \in T(\Sigma, V)$ then $f(t_1, \ldots, t_n) \in T(\Sigma, V)$
We can now encode equations using terms. A $\Sigma$-equation is just a pair of terms that we wish to consider equal. We write such equations $s \approx t$, where $s$ and $t$ are $\Sigma$-terms, to make it clear that $s$ and $t$ are not the same term, but should be considered equal in the theory.
(For simplicity, we are presenting untyped terms here.)
The axioms of our theory, for example the axioms of a group, are encoded as a set of $\Sigma$-equations, and equational logic [17] describes how we can use those axioms to derive further equalities via the following inference rules, adapted from [3] for the notion of signature we are using:
where $\sigma$ is a substitution that replaces variables with other terms and $f$ is an $n$-ary function symbol of $\Sigma$.
We commonly wish to determine whether two $\Sigma$-terms $s$ and $t$ are equal by this logic under a certain set of axioms $E$; in other words, is it the case that $E \vdash s \approx t$? This is known as the word problem and is a major application of term rewriting.
The obvious way to approach the word problem is to attempt to use the rules to construct a proof that two terms are equal directly. Term rewriting, in contrast, turns one term into another term that is equal according to the AXIOM, SUBST and LEIBNIZ rules. This is done by directing equations to make them rewrite rules, written $l \to r$.
A set $R$ of rewrite rules forms a term rewrite system, and we define the relation $\to_R$ such that $s \to_R t$ if and only if there is a substitution $\sigma$ and a rewrite rule $l \to r$ in $R$ such that $\sigma(l)$ occurs in $s$, and replacing that occurrence by $\sigma(r)$ yields $t$. For example, if $m(e, x) \to x$ is in $R$, then we can deduce that
Matching is the process of finding an appropriate substitution $\sigma$ and locating occurrences of $\sigma(l)$ in $s$, and is decidable. Hence, given a finite set of rewrite rules $R$ and a term $s$, it is possible to find all terms $t$ such that $s \to_R t$.
The utility of this approach comes from the observation that the reflexive, symmetric, transitive closure of $\to_R$ - which we write $\leftrightarrow^*_R$ - corresponds exactly to the $\vdash$ of equational logic; in other words, if we allow ourselves to view $R$ as a set of equations, $s \leftrightarrow^*_R t$ if and only if $R \vdash s \approx t$.
This observation means, in particular, that if we can find a common term $u$ such that $s \to^*_R u$ and $t \to^*_R u$ (where $\to^*_R$ is the reflexive transitive closure of $\to_R$), then we can deduce that $R \vdash s \approx t$. For term rewrite systems that are terminating (there is no infinite chain $t_1 \to_R t_2 \to_R \cdots$) and confluent (if $s \to^*_R t_1$ and $s \to^*_R t_2$ then there is a term $u$ such that $t_1 \to^*_R u$ and $t_2 \to^*_R u$), the word problem is decidable.
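The whole pipeline of matching, rewriting and normalising can be sketched in a few lines of Python. This is a toy implementation (our own encoding: terms are nested tuples `("f", t1, ..., tn)`, variables are bare strings, and the example rules are the directed unit laws of a monoid), intended only to make the definitions above concrete.

```python
def match(pattern, term, subst=None):
    """Try to extend subst so that applying it to pattern yields term;
    return the extended substitution, or None on failure."""
    subst = dict(subst or {})
    if isinstance(pattern, str):                      # pattern variable
        if pattern in subst:
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if isinstance(term, str) or pattern[0] != term[0] \
            or len(pattern) != len(term):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def apply_subst(subst, term):
    """Replace variables in term according to subst."""
    if isinstance(term, str):
        return subst.get(term, term)
    return (term[0],) + tuple(apply_subst(subst, t) for t in term[1:])

def rewrite_once(rules, term):
    """Perform one rewrite step at the root or in a subterm, or None."""
    for lhs, rhs in rules:
        s = match(lhs, term)
        if s is not None:
            return apply_subst(s, rhs)
    if not isinstance(term, str):
        for i, sub in enumerate(term[1:], start=1):
            new = rewrite_once(rules, sub)
            if new is not None:
                return term[:i] + (new,) + term[i + 1:]
    return None

def normalize(rules, term):
    """Rewrite to a normal form; terminates for terminating rule sets,
    and for confluent ones the result decides the word problem."""
    while (new := rewrite_once(rules, term)) is not None:
        term = new
    return term

# Unit laws of a monoid, directed left-to-right: m(e, x) -> x, m(x, e) -> x
rules = [(("m", ("e",), "x"), "x"), (("m", "x", ("e",)), "x")]
```

For instance, `normalize(rules, ("m", ("e",), ("m", "x", ("e",))))` reduces $m(e, m(x, e))$ to the variable `"x"`; since the system is terminating and confluent, comparing normal forms decides equality under these axioms.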
While terms, which can have multiple inputs (the variables) but only one output, are well-suited for encoding algebras, they are less useful for structures like coalgebras, where we can have multiple outputs. For example, while a multiplication map $\mu : A \otimes A \to A$ in a monoidal category is easy to represent as a binary term, a comultiplication map $\delta : A \to A \otimes A$ does not have such a clear symbolic representation.
Taking our cue from the diagrams of chapter 2, we will use graphs to construct a combinatorial symbolic representation of the sort of maps found in a monoidal category.
3.2 Monoidal Signatures
As with term rewriting, we need a way to represent items of interest from the theory we are reasoning about; in this case, these items correspond to objects and morphisms in a monoidal category.
Definition 3.2.1 (Monoidal Signature: [23], p. 30). A (small, strict) monoidal signature $(O, M, \mathrm{dom}, \mathrm{cod})$ consists of two sets $O$ and $M$ together with a pair of functions $\mathrm{dom}, \mathrm{cod} : M \to O^*$ into lists of elements of $O$.
The intended interpretation of a monoidal signature is that the elements of $O$ are representations of objects of a monoidal category, $M$ is (some of) its morphisms and dom and cod give the types of the domain and codomain of the morphisms. An empty list corresponds to $I$, the tensor unit.
Definition 3.2.2 (Monoidal Homomorphism: [23], pp 31). If S and T are monoidal signatures, a monoidal signature homomorphism from S to T consists of a function between their object sets and a function between their morphism sets making the following diagrams commute:
Recall from chapter 2 the "spiders" that arise from Frobenius algebras. These are families of morphisms that differ only in how many inputs or outputs of a certain type they have. Rather than having an infinite monoidal signature with a morphism symbol for each possible combination of inputs and outputs, it would be useful to have a single symbol with variable arity. To this end, we adapt the definition of a monoidal signature:
Definition 3.2.3 (Compressed Monoidal Signature). If O and M are sets and dom, cod are functions taking elements of M to lists of elements of O, each tagged as either fixed-arity or variable-arity, then (O, M, dom, cod) is a compressed monoidal signature.
The tags are intended to indicate whether the corresponding input or output is of variable or fixed (single, in fact) arity. If it has variable arity, it can be removed or duplicated.
Of course, if this is to be a true compressed representation of a larger monoidal signature, we need a way to reconstruct that monoidal signature from the compressed version.
Definition 3.2.4 (Expansion). Let S be a compressed monoidal signature. For each morphism symbol, consider the indices of the variable-arity elements of its domain and codomain lists. The expansion of S is the monoidal signature with the same object symbols, and with a morphism symbol for each choice of morphism symbol of S together with an assignment of a natural number to each of its variable-arity indices, whose domain (respectively codomain) is the domain (respectively codomain) list with each fixed-arity element kept as a single copy of its object symbol and each variable-arity element replaced by the assigned number of copies of its object symbol.
A compressed monoidal signature is said to be a compressed monoidal signature for a monoidal signature if its expansion is isomorphic to that signature. Compressed monoidal signatures that make no use of variable-arity tags correspond exactly to ordinary monoidal signatures.
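The expansion of a single domain or codomain list can be sketched as follows (an illustration of ours, not code from the thesis); an entry is a pair of an object symbol and a flag marking variable arity, and an arity assignment gives a number of copies for each index, with fixed-arity indices forced to 1.

```python
# An entry of a compressed domain/codomain list is a pair (obj, variable)
# where variable=True marks a variable-arity position.  The arity list
# assigns a natural number to each index, and must assign 1 to fixed-arity
# indices; expansion replaces each entry by that many copies of its object.

def expand(entries, arity):
    out = []
    for (obj, variable), n in zip(entries, arity):
        assert variable or n == 1, "fixed-arity entries must have arity 1"
        out.extend([obj] * n)
    return out
```

So a variable-arity entry can vanish (arity 0) or be duplicated, while a fixed-arity entry always contributes exactly one copy.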
Example 3.2.5. Consider the compressed monoidal signature where
To make the description simpler, and more evocative of the intended interpretation, we use the following notation:
The expansion of this signature would include
and
Since the elements of a compressed monoidal signature are meant to represent families of morphisms in a category, we need a way of determining which morphisms they are supposed to represent. This is provided by a valuation.
Note: given a list, we will sometimes refer to the set of its indices.
Definition 3.2.6 (Arity Count). Let S be a compressed monoidal signature and f a morphism symbol of S. Then an arity count for f is a pair of maps from the indices of the domain and codomain lists of f to the natural numbers such that the index of each fixed-arity element maps to 1.
Definition 3.2.7 (Valuation). A valuation of a compressed monoidal signature over a monoidal category consists of a map from the object symbols to the objects of the category together with, for each morphism symbol, a map that takes arity counts for that symbol to morphisms of the category with
where is the monoidal identity ,and .
The valuation is said to be expansion-order invariant if the category is a symmetric monoidal category and, for all morphism symbols and all arity counts for them, whenever we choose the index of a variable-arity entry, then
where is the map
where ,and
where the remaining maps are defined analogously for the codomain side.
Expansion-order invariance is intended to capture the idea that, in the expansion of a compressed monoidal signature, a morphism should be commutative on the inputs derived from the same variable-arity input, and cocommutative on the outputs derived from the same variable-arity output.
Example 3.2.8. Suppose we have a CFA (page 13) on an object of a symmetric monoidal category
and the compressed monoidal signature
If we want to represent the spiderised version of the CFA, we can construct a valuation that sends each arity count to the appropriate number of copies of the multiplication followed by the appropriate number of copies of the comultiplication, arranged in the following form:
If ,we use to get a domain of ,and if we use to get a codomain of .
This valuation is expansion-order invariant because CFAs are (co-)commutative and (co-)associative, so precomposing or postcomposing the map above with swap maps does not affect the value of the morphism.
3.3 Graphs
A well-established categorical construction for directed graphs is to use functors into Set, the category of (small) sets:
Definition 3.3.1. Graph is the category of functors from the category
to Set.
Each element of Graph therefore consists of two sets, one of edges and one of vertices, and a pair of functions selecting the source and target of each edge. We say that an edge is incident to both its source and its target, and call the set of edges incident to a vertex the edge neighbourhood of that vertex. Also, the source of an edge is adjacent to its target, and vice versa.
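This description translates directly into code; the following sketch (ours, with hypothetical names) stores the two sets and the source and target functions explicitly.

```python
from dataclasses import dataclass

# A graph as two sets and two functions, mirroring the functorial
# description: edges, vertices, and source/target maps.

@dataclass
class Graph:
    vertices: set
    edges: set
    source: dict   # edge -> vertex
    target: dict   # edge -> vertex

    def edge_neighbourhood(self, v):
        """The set of edges incident to v (having v as source or target)."""
        return {e for e in self.edges
                if self.source[e] == v or self.target[e] == v}

# A single edge e from u to v; e is incident to both, and u is adjacent to v.
g = Graph({"u", "v"}, {"e"}, source={"e": "u"}, target={"e": "v"})
```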
We can separate vertices and edges into types through the use of a typegraph. This also allows the possible sources and targets of the edges to be constrained. For a given graph T, we can construct the slice category over T, written Graph/T. The elements of this category can be viewed as pairs of a graph and a graph morphism into T, known as the typing morphism, and the morphisms of the category are those graph morphisms that respect the typing morphisms, in the sense that
commutes. We call such a pair a T-typed graph.
If S is a subgraph of T, then any S-typed graph can be viewed as a T-typed graph, simply by reinterpreting the codomain of the typing function (or, equivalently, composing the typing function with the embedding morphism from S to T). The general theorem, stated in categorical terms, is as follows:
Theorem 3.3.2. Let C be a category, and m : S → T a monomorphism in C. Then m induces a full embedding functor from C/S to C/T. If C has pullbacks along monomorphisms, this functor has a right adjoint, and the composite of the embedding with its right adjoint is the identity functor.
Proof. If is an object of ,we define ,and let be the identity on morphisms. Recall that a morphism of is a morphism of if and only if
But if this is the case, then we must have
and hence is a well-defined functor. Since is monic,the converse is also true,and so is full.
Now suppose has pullbacks along monomorphisms; we need to show that has a right adjoint . Given an object in ,we define to be in the following pullback: which exists since is a monomorphism. Now let be a morphism of . We define to be in the following diagram,which exists and is unique by pullback:
The natural transformations and witness that is right adjoint to ,and that .
Remark 3.3.3. Note that, for any graph T, monomorphisms are exactly injective functions in both Graph and Graph/T. It follows that if m is a graph monomorphism, the two functors in the above theorem preserve and reflect monomorphisms.
We will sometimes, as in the next proposition, use the term "pushout of monomorphisms"; this refers to the pushout of a span of monomorphisms, as distinct from a pushout along a monomorphism, where only one arrow in the span needs to be monic.
Proposition 3.3.4. Let S and T be graphs and m : S → T a monomorphism. Then the functors from theorem 3.3.2 preserve pushouts of monomorphisms.
Proof. All left adjoint functors preserve pushouts, and so the embedding functor does in particular.
Consider the following pushout of monomorphisms in Graph/T:
By the Set-based construction of Graph, we know that the above is a pushout if and only if the pushout object is covered by the images of the two legs and the intersection of those images is exactly the image of the apex of the span (note that this is only true because all the arrows in the diagram are monomorphisms).
We know that the following diagram commutes and consists of monomorphisms in :
Recall the construction of,for example,the map :
It suffices to show that,for each if and only if and similarly for the other morphisms of the pushout. The argument is the same for each morphism, so we will only treat . We can see from the left square of the diagram that if ,then
For the converse,suppose ,and let be its (unique) preimage under . Now, ,by the pullback square defining . So then ,since . So by the pullback defining and its preimage must map to under . Hence .
3.3.1 String Graphs
Arbitrary graphs, however, are too general for our purposes. They have no concept of inputs or outputs, and so there is no clear way to represent a morphism. We therefore make use of string graphs, introduced as open graphs in [10] and further refined in [23]. These separate vertices into two kinds: node-vertices and wire-vertices. One way of viewing these is that, in translating the diagrams we saw in chapter 2 into string graphs, node-vertices correspond to the nodes of those diagrams, while wire-vertices "hold the wires in place". For example, the diagram
could be represented as
where the larger white circle is a node-vertex and the smaller black ones are wire-vertices.
Just as terms were defined relative to a signature, we will define string graphs relative to a compressed monoidal signature. We will use a typegraph, so that the type of each node-vertex indicates which morphism symbol it represents, and the type of each wire-vertex indicates which object symbol it represents.
The following is adapted from the definition of a derived typegraph ([23], pp 88).
Definition 3.3.5. Given a compressed monoidal signature, the derived compressed typegraph has a vertex for each object symbol and each morphism symbol, a self-loop on each object vertex, and, for every morphism symbol:
- an edge into its vertex from the vertex of the corresponding object symbol, for each index of its domain list, and
- an edge out of its vertex to the vertex of the corresponding object symbol, for each index of its codomain list.
Example 3.3.6. Recall the compressed monoidal signature from example 3.2.5 .
The derived compressed typegraph would then be
The typegraph for the (spider) calculus (section 2.2) is very simple:
Let S be a compressed monoidal signature. Then for a graph typed over the derived compressed typegraph and a vertex v of it, if the image of v is an object vertex of the typegraph, we call v a wire-vertex. Otherwise, the image of v must be a morphism vertex, and we call v a node-vertex. If v is a node-vertex and e is an edge incident to v, we call e variable-arity if its image in the typegraph is tagged as variable-arity and fixed-arity if it is tagged as fixed-arity. The fixed edge neighbourhood of a node-vertex v is the set of edges in the edge neighbourhood of v that are fixed-arity.
In general, we will depict wire-vertices as small black dots.
Definition 3.3.7 (Arity-matching). A map f between typed graphs is arity-matching if, for every node-vertex v, the restriction of f to the fixed edge neighbourhood of v is a bijection onto the fixed edge neighbourhood of f(v).
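The arity-matching condition at a single node-vertex can be checked mechanically; the following sketch (ours, with hypothetical names) takes the edge map and the two fixed edge neighbourhoods as plain sets and a dictionary.

```python
# The arity-matching condition at a node-vertex v: the edge map restricted
# to the fixed edge neighbourhood of v must be a bijection onto the fixed
# edge neighbourhood of f(v).

def arity_matching_at(edge_map, fixed_nbhd_v, fixed_nbhd_fv):
    image = [edge_map[e] for e in fixed_nbhd_v]
    return (set(image) == set(fixed_nbhd_fv)    # surjective onto the target
            and len(set(image)) == len(image))  # injective
```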
Note that, by considering the typegraph as typed over itself by the identity, we can view typing morphisms as morphisms of typed graphs, and hence apply the above terminology of arity-matching to typing morphisms.
The following is adapted from the definition of a string graph in [23], pp 89; it is equivalent to the original definition if the typegraph has no variable-arity edges.
Definition 3.3.8 (String Graph). A typed graph is a string graph if its typing morphism is arity-matching and each wire-vertex has at most one incoming edge and at most one outgoing edge. The category SGraph is the full subcategory of typed graphs whose objects are string graphs.
We refer to a wire-vertex of a string graph with no incoming edges as an input. Similarly, a wire-vertex with no outgoing edges is called an output. The inputs and outputs together form the boundary of the graph, written Bound.
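The wire-vertex conditions and the boundary are easy to compute; in the following sketch (ours, with hypothetical names), a graph is given as a set of wire-vertices and a list of directed edges.

```python
from collections import Counter

# A graph is given as a set of wire-vertices and a list of directed edges
# (source, target).  A wire-vertex with no incoming edge is an input, one
# with no outgoing edge an output; together they form the boundary.

def boundary(wire_vertices, edges):
    has_in = {t for (_, t) in edges}
    has_out = {s for (s, _) in edges}
    inputs = wire_vertices - has_in
    outputs = wire_vertices - has_out
    return inputs, outputs, inputs | outputs

def wire_condition(wire_vertices, edges):
    """Each wire-vertex has at most one incoming and one outgoing edge."""
    indeg = Counter(t for (_, t) in edges)
    outdeg = Counter(s for (s, _) in edges)
    return all(indeg[w] <= 1 and outdeg[w] <= 1 for w in wire_vertices)
```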
We will only consider finite string graphs and finite typed graphs, since we are only interested in graphs that are tractable for computers.
Lemma 3.3.9. If f is a morphism in SGraph and v is a vertex that maps to an input (respectively output) under f, then v must be an input (respectively output).
Proof. Suppose e is an edge whose target is v. Then the target of f(e) must be f(v), and so f(v) cannot be an input unless v is. The output case is symmetric.
It is worth noting that not every subgraph of a string graph is a string graph. In particular, if G is a string graph, v is a node-vertex in G and e is a fixed-arity edge incident to v, a subgraph of G that contains v but not e will fail to satisfy the arity-matching requirement of the typing morphism. However, the intersection and union of two string graphs are still string graphs.
Proposition 3.3.10. Let H and K, both string graphs, be subgraphs of the string graph G. Then their intersection and union are both string graphs.
Proof. Any subgraph of a string graph satisfies the requirements about wire-vertices, so we only have to consider whether the typing morphisms are arity-matching. What is more, the typing morphism of any subgraph of a string graph must be injective on the fixed-arity neighbourhood of any of its node-vertices; we just need to consider surjectivity.
Let v be a node-vertex of the intersection and let t be the restriction of the typing morphism to the fixed-arity neighbourhood of v. Let e be a fixed-arity edge incident to the image of v in the typegraph. Then the typing morphisms of both H and K must have e in their images; since both are restrictions of the typing morphism of G, which is injective on this neighbourhood, the preimage of e is the same edge in each, so it lies in the intersection and hence e is in the image of t. Thus t is surjective onto the fixed-arity neighbourhood in the typegraph, and so is arity-matching.
The argument for the union is similar.
Remark 3.3.11. The construction of the typegraph means that there can be no edges between node-vertices; any connection between node-vertices is mediated by wire-vertices. Since wire-vertices in string graphs have at most one incoming and one outgoing edge, every string graph is simple, in the sense that for any two vertices, there is at most one edge between them in each direction.
Proposition 3.3.12. Every morphism in SGraph is arity-matching.
Proof. Let G and H be string graphs and f : G → H a graph morphism. Consider a node-vertex v of G. Since the typing morphisms commute with f, the typing of G and the composite of f with the typing of H agree when restricted to the fixed edge neighbourhood of v. But the typing of G is arity-matching, and so its restriction to this neighbourhood is a bijection, and hence the same is true of the composite. But that means that f restricted to the fixed edge neighbourhood of v must also be a bijection, as required.
The following is adapted from a similar proof in [23], pp 90. We can conclude from it that SGraph is what Kissinger calls a partial adhesive category; we have chosen not to use this notion on the basis that it does not improve the clarity of the work.
Proposition 3.3.13. A morphism of SGraph is monic if and only if it is injective.
Proof. Since this holds in the category of typed graphs, any injective map in SGraph must be monic there, and hence also monic in SGraph.
Suppose we have a non-injective morphism f in SGraph. Then f must either map two or more edges to the same edge or map two or more vertices to the same vertex. In fact, since both graphs are simple, f must map two or more vertices to a single vertex. Consider the smallest subgraph of the codomain containing that vertex: it must be the vertex, together with any incident fixed-arity edges (and the wire-vertices at the other end of those edges). Then for each vertex in the preimage, we construct the string graph morphism that takes the single central vertex of this subgraph to it, and the fixed-arity edges to the correct incident edges (to make the typing morphisms commute). Composing each of these morphisms with f yields the same morphism, but the constructed morphisms are distinct, so f is not monic.
For the rest of this chapter, we fix an arbitrary compressed monoidal signature, and objects and morphisms will be implicitly drawn from the associated category of string graphs unless otherwise stated.
Wires
It will sometimes be useful to have a different view of a string graph, where we treat each connected string of edges and wire-vertices as a single unit. As the name wire-vertex suggests, we will refer to these as wires.
We use a different definition of wires (and wire homeomorphisms) from that of [23]; our definition has several advantages, notably allowing for proposition 3.3.17 and the ability to refer to explicit wire homeomorphisms. However, despite the different approach, the two definitions are morally equivalent.
Definition 3.3.14 (Wire). A wire path of a string graph G is a path in G containing at least one edge, whose internal vertices are all wire-vertices, and which is not contained in a longer path of this kind.
A closed wire of G is a pair (E, V) such that there is a wire path p of G where E is the set of edges in p and V is the set of vertices of p.
An open wire of G is a pair (E, V) such that there is a wire path p of G where E is the set of edges in p and V is the set of internal vertices of p, plus the start/end vertex if p is a cycle.
Wires(G) is the set of all open wires of a string graph G.
Note that an isolated wire-vertex (with no incident edges) is not a wire path.
Proposition 3.3.15. Let G be a string graph. Then two distinct wire paths of G either describe the same cycle or overlap only on their start and end vertices.
Proof. The maximality requirement of wire paths means that, except in the case of cycles, the start or end vertex of one path cannot be an internal vertex of another. So distinct paths with a shared component must diverge at some point. However, the requirements of a string graph mean this cannot happen at a wire-vertex, and so can only happen at the start or end vertex. Thus either the paths are identical (or, at least, describe the same cycle) or are distinct everywhere except the start and end vertices.
This means there is a one-to-one correspondence between open and closed wires. In most cases, it will not matter whether we are referring to an open or a closed wire, and so we will simply use the term wire. It also means that each wire path gives rise to only one closed wire and one open wire, and only wire paths that describe the same cycle produce the same wire.
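The extraction of wire paths can be sketched as follows (an illustration of ours, not code from the thesis); it assumes the string-graph condition that wire-vertices have at most one incoming and one outgoing edge, follows chains of edges through wire-vertices, and picks up circles in a second pass.

```python
# Maximal wire paths of a string graph, as lists of edges.  The graph is
# given as a set of wire-vertices and a list of directed edges (source,
# target); any other vertex appearing in an edge is taken to be a
# node-vertex.  The string-graph degree condition is assumed.

def wire_paths(wire_vertices, edges):
    nxt = {s: (s, t) for (s, t) in edges if s in wire_vertices}  # unique out-edge
    has_in = {t for (_, t) in edges}
    paths, seen = [], set()
    # edges that no wire path can extend backwards through are path starts
    starts = [e for e in edges if not (e[0] in wire_vertices and e[0] in has_in)]
    for e in starts:
        path = [e]
        while e[1] in wire_vertices and e[1] in nxt:
            e = nxt[e[1]]
            path.append(e)
        paths.append(path)
        seen.update(path)
    # any edges left over lie on circles made entirely of wire-vertices
    for e in edges:
        if e not in seen:
            path, cur = [e], nxt[e[1]]
            seen.add(e)
            while cur != e:
                path.append(cur)
                seen.add(cur)
                cur = nxt[cur[1]]
            paths.append(path)
    return paths
```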
Proposition 3.3.16. The wire paths, and hence wires, of a string graph can be sorted into the following disjoint kinds, based on their start and end vertices:
circles, where the start and end vertices are the same;
interior wires, where the start and end vertices are both node-vertices;
bare wires, where the start and end vertices are distinct wire-vertices, the start being an input and the end an output;
input wires, where the start vertex is an input wire-vertex and the end vertex is a node-vertex; and
output wires, where the start vertex is a node-vertex and the end vertex is an output wire-vertex.
Additionally, all the wire-vertices of a wire path have the same type, and this is called the type of the wire path (and of its derived open and closed wires).
Proof. The main thing we need to note for the categorisation is that if the path is not a cycle (and hence not a circle), the maximality requirement on a wire path requires any start or end vertex that is a wire-vertex to be in the boundary, since otherwise it could be an internal vertex of a longer wire path.
The second part is a consequence of the construction of the typegraph, which disallows edges between wire-vertices of distinct types.
For wires other than circles, we call the start vertex of the path it is derived from the source of the wire, and the end vertex its target. Closed wires include their source and target, while open wires do not.
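The classification of proposition 3.3.16 depends only on the start and end vertices of a wire path, as the following sketch (ours, with hypothetical names) shows; input and output wires are recognised by which endpoint is the node-vertex.

```python
# Classify a wire path from its start and end vertices (Proposition
# 3.3.16).  node_vertices is the set of node-vertices of the graph.

def wire_kind(start, end, node_vertices):
    if start == end:
        return "circle"
    start_is_node = start in node_vertices
    end_is_node = end in node_vertices
    if start_is_node and end_is_node:
        return "interior"
    if start_is_node:
        return "output"   # ends at an output wire-vertex
    if end_is_node:
        return "input"    # starts at an input wire-vertex
    return "bare"
```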
Proposition 3.3.17. Let G be a string graph. Then the node-vertices of G, Bound and the elements of Wires(G) cover G exactly (with no overlap).
Proof. Suppose v is a wire-vertex of G not in Bound. Then it must have both an incoming and an outgoing edge. These, together with their respective source and target, form a path whose internal vertices are wire-vertices. This must either be a wire path of G or else be part of a longer wire path, and hence v is part of an open wire of G. Similarly, any edge of G trivially forms a path whose internal vertices are wire-vertices, and is therefore contained in an open wire of G. So these components cover G.
Node-vertices and wire-vertices cannot overlap, due to the typegraph. Similarly, open wires contain no node-vertices, and so cannot overlap the node-vertices. Any vertex in an open wire is an internal vertex of a path, and so cannot be in Bound. Finally, proposition 3.3.15 gives us that no two open wires intersect.
Wire homeomorphisms provide a way to map between graphs that are equivalent under this view.
Definition 3.3.18 (Wire Homeomorphism). A wire homeomorphism between two string graphs consists of three bijective type-preserving functions, one between their node-vertices, one between their boundary vertices and one between their wires,
such that for each wire in the first graph whose source is a node-vertex (resp. boundary vertex), the source of the corresponding wire in the second graph is the image of that vertex under the node-vertex (resp. boundary) bijection, and similarly for targets of wires.
If there is a wire homeomorphism from one string graph to another, then the two graphs are said to be wire homeomorphic.
3.4 Valuation
In order for reasoning using string graphs to be useful, we need some way to interpret them. Node-vertices are intended to correspond to morphisms of a category, wire-vertices to objects of the category and edges to some concept of "information flow". The inputs of a graph naturally correspond in some way to the domain of the morphism represented by the graph, and the outputs to the codomain.
The construction of the value of a graph is essentially the same as the one Kissinger uses when he constructs a free traced SMC and a free compact closed category using string graphs ([23], pp 96-99, 104-112). We bring the notion of the value of a graph to the fore here, however, as we are more concerned with that than with free categories.
Our approach will be to fix an order on the inputs and outputs of a graph, using framed cospans, and then to break down the graph into elementary subgraphs that we can use a valuation (definition 3.2.7) to assign values (morphisms) to. For example, the graph (3.1)
will be broken down into
which will be assigned morphisms, say f and g respectively. We then take the tensor product of these morphisms
and use the trace, in the form of a contraction, to link the morphisms together in the manner indicated by the connections of the original string graph
Swap maps and dual maps are then applied as necessary to get the inputs and outputs in the right order.
3.4.1 Framed String Graphs
Order is important. In a monoidal category, the morphisms
cannot be the same, although in a symmetric monoidal category, it is possible that
In particular, consider the swap and identity morphisms in a symmetric monoidal category
This would naturally be represented in string graph form as
However, these are the same graph (or, at least, isomorphic), and so there is no sensible way to assign one the value of the swap and the other the identity. In a compact closed category, we have a similar issue with the morphisms
whose graph representations should look like
To deal with this, we use framed cospans [23].
Definition 3.4.1 (Framed Cospan). A string graph frame is a triple consisting of a string graph made up only of isolated wire-vertices, a total order on its vertices, and a signing map assigning + or - to each vertex.
A cospan of string graphs is called a framed cospan if
the two feet of the cospan are string graph frames
the central graph contains no isolated wire-vertices
the induced map from the frames restricts to an isomorphism onto the boundary of the central graph
for every vertex of the domain frame, the sign is + if its image is an input of the central graph and - if it is an output
for every vertex of the codomain frame, the sign is + if its image is an output of the central graph and - if it is an input
The frames can be seen as "holding the boundary in place". The signing map marks with a - those inputs that appear in the codomain of the cospan, and those outputs that appear in the domain. These correspond to dual objects in compact closed categories. A framed cospan where every vertex in both frames is signed + is called positive.
Given a string graph, we will often use a fixed framing of it (i.e. a framed cospan whose central graph is that string graph). The particular framing, if it is relevant, will be given by the context.
The composition of two framed cospans
is formed by the pushout
where the resulting cospan is
Remark 3.4.2. We remarked earlier that proposition 3.3.12 means that SGraph is what Kissinger calls a partial adhesive category in [23]. This, and the value construction above, form the necessary components for applying his proofs ([23], pp 104-112), namely that classes of framed cospans up to wire homeomorphism form a compact closed category and that homeomorphism classes of positive framed cospans form a symmetric traced category, to our extended version of string graphs.
3.4.2 Indexings and Contractions
Before we define the value of a graph, we will need some notation. We fix a traced symmetric monoidal category .
Definition 3.4.3 (Indexing). Let X be a (small) set of objects of the category. For an object, an X-indexing of it is an indexed tensor product of the form
that is isomorphic to the object being indexed. We call a morphism indexed when we have fixed an indexing of its domain and an indexing of its codomain.
For an indexing as above, we define the map shunt to be the (unique) isomorphism constructed from the identity and swap maps of the category that moves a chosen element of the indexing to the end and leaves all others in the same order, as in the following diagram:
Definition 3.4.4 (Contraction). Given an indexed map ,where ,the contraction of from to ,is the morphism
A contraction of an indexed morphism yields another indexed morphism. We can therefore apply multiple contractions to the same morphism, and these contractions commute:
Proposition 3.4.5 (Kissinger). For an indexed morphism and distinct indices and ,
Proposition 3.4.6. Given - and -indexings and ,and morphisms
and
if for some and for some ,then
Proof. This follows from the definition and the laws of traces.
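The index bookkeeping in this subsection can be modelled on plain lists; the following sketch (ours, not from the thesis) treats the shunt isomorphism as a permutation that moves a chosen tensor factor to the end, together with its inverse.

```python
# The shunt isomorphism, modelled on a list of tensor factors: move the
# factor at index i to the end, keeping the others in order.  unshunt is
# its inverse, moving the last factor back to position i.

def shunt(factors, i):
    return factors[:i] + factors[i + 1:] + [factors[i]]

def unshunt(factors, i):
    return factors[:i] + [factors[-1]] + factors[i:-1]
```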
3.4.3 Value
Let us fix a symmetric traced category and an expansion-order invariant valuation of our signature over it. For a framed cospan of string graphs, we define its value under the valuation in the following manner.
The elementary subgraphs of a string graph are those subgraphs that consist of one of the following:
a single node-vertex together with its incident edges and adjacent wire-vertices
an edge between wire-vertices, together with those wire-vertices (which may be the same vertex)
a wire-vertex that has no incident edges in the graph
Note that these are all valid string graphs. The definition of a string graph (and, in particular, the restriction on incident edges of wire-vertices) means that:
the graph is covered by its elementary subgraphs,
no node-vertex or edge appears in more than one elementary subgraph,
wire-vertices appear in at most two elementary subgraphs, and
any wire-vertex that does appear in two elementary subgraphs is an input in one and an output in the other.
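The decomposition into elementary subgraphs follows the three clauses directly; in the following sketch (ours, with hypothetical names), a subgraph is a pair of a vertex set and an edge list.

```python
# Elementary subgraphs as pairs (vertex set, edge list): one per
# node-vertex with its incident edges and adjacent wire-vertices, one per
# edge between wire-vertices, one per isolated wire-vertex.

def elementary_subgraphs(node_vertices, wire_vertices, edges):
    subs = []
    for n in node_vertices:
        inc = [e for e in edges if n in e]
        verts = {n} | {v for e in inc for v in e if v != n}
        subs.append((verts, inc))
    for e in edges:
        if e[0] in wire_vertices and e[1] in wire_vertices:
            subs.append((set(e), [e]))
    used = {v for e in edges for v in e}
    for w in wire_vertices - used:
        subs.append(({w}, []))
    return subs
```

On the example below, a node with one input and one output wire, a bare wire and an isolated wire-vertex give exactly three elementary subgraphs, and every edge lies in exactly one of them.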
Example 3.4.7. The string graph (3.2)
has the following four elementary subgraphs:
We then assign a value to each of these subgraphs, as well as a wire-vertex-based indexing for the domain and codomain of that value.
For an elementary subgraph with a node-vertex v, and for each incoming edge of the image of v in the typegraph, let the count be the size of the preimage of that edge under the typing morphism (i.e. the number of edges incident to v that map to it), and define counts similarly for each outgoing edge. These are arity counts, since the typing morphism is arity-matching. The value of the subgraph is then the morphism assigned by the valuation at these arity counts.
Consider the domain of this value, and let W be the set of wire-vertices of the subgraph that are sources of incoming edges of the node-vertex. If W is empty, there is nothing to index. Otherwise, we can choose a total order on W that is non-decreasing with respect to the order of the tensor factors of the domain, and this ordered set provides a W-indexing in which earlier wire-vertices index earlier factors. We can construct a similar indexing for the codomain using the set of targets of edges whose source is the node-vertex (unless this is empty).
If the elementary subgraph is instead an edge connecting two wire-vertices, we assign it the identity on the object that is the image of its type, and index its domain by the source wire-vertex and its codomain by the target (note that these may be the same vertex). If it is an isolated wire-vertex, we similarly assign the identity on the corresponding object and index both domain and codomain by that vertex.
We then take the tensor product of the values of each elementary subgraph of the graph. A diagrammatic representation of this morphism for the graph (3.2) is
The total orders we placed on the inputs of each elementary subgraph, together with the order of the tensor product, give us an order on the union of the domain index sets, and (provided this union is not empty) this gives rise to an indexing of the domain of the tensor product that agrees with the indexings we constructed for the elementary subgraphs; the same can be done for the codomain.
Now we order the wire-vertices that appear in both the domain and codomain indexings, but are not isolated wire-vertices in the graph, and for each in turn we apply a contraction:
For the previous example, this would result in
Note that each contracted vertex is removed from both indexings and, conversely, every other vertex remains. This means that, given that the isolated wire-vertices of the graph are the only vertices to be both inputs and outputs of it, the remaining indices in the indexing of the domain are exactly the inputs of the graph, and those in the indexing of the codomain are its outputs.
This means we can simply precompose and postcompose with the necessary swap maps to re-order the indexings of the domain and codomain to agree with the orderings of the frames. We call the resulting morphism the value of the framed cospan.
This final reordering means that the choice of order of elementary subgraphs when constructing the tensor product will not affect the value. The fact that contractions commute means that the choice of order of wire-vertices when applying the contractions does not affect the value. Finally, since the valuation is expansion-order invariant, the choice of indexing for the domains and codomains of the elementary subgraphs does not affect the value.
In a compact closed category, a combination of swap maps and the maps associated with dual objects can be used to produce the value of an arbitrary framed cospan from that of a positive one.
Note that the domain and codomain of the value depend only on the frames, not on the central graph. Note also that the frames only affect the final part of the construction, and so the values of two different framed cospans with the same internal string graph are the same up to pre- and postcomposition with swap maps if both are positive, and with swap maps and dual object maps otherwise.
We will assume all valuations are expansion-order invariant from now on.
Proposition 3.4.8. Suppose we have two framings and a wire homeomorphism between their central graphs such that the following diagram commutes
where the triangles on the left and right are taken from the definition of a cospan frame, and the top and bottom morphisms are order- and sign-preserving. Then, for every valuation,
Proof. We start by noting that the diagram above means we can identify the frames and boundaries of the two cospans with each other in a consistent manner.
Now we make the observation that if two graphs are wire-homeomorphic, then there is a third graph that is wire-homeomorphic to both in which no two wire-vertices are adjacent to each other. What is more, this graph can be reached from both of the others by a (finite) series of 0 or more wire contractions, where an edge between two distinct wire-vertices is removed and its endpoints are identified. Thus we just have to show that the value of a framed cospan is stable under wire contraction, and the result follows by transitivity.
So let us take an edge whose source and target are distinct wire-vertices. Then this edge (together with its source and target) is an elementary subgraph. If the source is an input of the graph and the target an output, contracting the edge will make no difference to the value, as we will simply be replacing one elementary subgraph with another that has the same value, and there are no contractions in either case.
So suppose the source of the edge is not an input of the graph. Then there is a contraction on it. We can choose to place the elementary subgraph containing the edge last in the tensor product and apply that contraction first. The morphism will then have the form (where we only display the last swap of the contraction)
where the remaining factor is the tensor product of the elementary subgraphs of the contracted form of the graph together with all but one swap from the contraction. Then we have
by the trace axioms. The extra swaps are irrelevant to the value of the contracted form of the graph, as they are either absorbed into the contraction on the target if it is not an output, or are corrected at the end to account for the frame ordering.
The case where the target is not an output is symmetric. So the value remains the same after contracting the edge, as required, and hence we have the required result.
3.5 Graph Equations
Our aim is to be able to mechanise equational reasoning for diagrams using string graphs. However, we first need to establish what we even mean by a "graph equation", and develop an equational logic for this concept.
Recall that in the equational logic for terms, there are rules stating that equality is closed under contexts and substitutions. We can view substitutions as a sort of internal context. For example, if we say
we mean that we can replace one side with the other (or vice versa) regardless of what is either inside or surrounding that symbol. This approach is well-suited to the tree-like structure of terms, where there is a single external context and variables provide an easy way of labelling internal contexts, but does not work so well for graphs, which have only an external context, but may have multiple points of interaction with that context.
Declaring that two morphisms are equal only makes sense if both morphisms have the same type. Given our intended interpretation of the string graphs, this means that they need the same inputs and outputs (respecting types). But that is not enough; consider the associativity law for multiplication. For terms, we would have something like
while with string graphs we would want something like
Visually, the meaning of this is clear, but we need some way to encode the correspondence between inputs; after all, one way of correlating the inputs would result in a law that was simply a consequence of commutativity:
We encode the correspondence of inputs and outputs using a span of graph morphisms whose images are the boundaries of the two graphs. We place some extra coherence requirements on the span to ensure that inputs are linked to inputs and outputs to outputs.
Definition 3.5.1 (String Graph Equation; [23], p. 93). A string graph equation is a span where:
and contain no isolated wire-vertices;
and ;
; and
the following diagram commutes,where and are the coproduct inclusions into the boundary composed with the above isomorphisms:
We can assemble similar laws to those we had in the term-based equational logic. Axiom, reflexivity and symmetry are straightforward:
(SYM)
where is the obvious inclusion of the boundary of into . Transitivity requires a bit more work to express. If we have the graph equations
and
then the constraints on graph equations mean that is both well-defined and an isomorphism . If we take and , we can express transitivity as
For this to reflect intuitive equational reasoning, though, we need to be able to apply equations in the context of larger graphs. Given graph equations
and
such that there are monomorphisms making the following diagram commute, with the squares in it being pushouts,
then we have the following rule
We actually need one more rule, to account for wire homeomorphisms. Given graph equations
and
we say they are wire homeomorphic if there are wire homeomorphisms and such that (3.3) where the isomorphisms are those induced by and (see definition 3.5.1). Given two such wire homeomorphic equations, we have the following rule
We should note here that multiple stacked applications of the LEIBNIZ rule can be condensed into a single application.
Proposition 3.5.2. Suppose we have the proof tree fragment
Then
is a valid proof tree fragment.
Proof. We must have the following commuting diagram
We just need to find a string graph and suitable monomorphisms to make (3.4)
commute. We will only demonstrate how to construct the left part of the diagram, as the right side can be constructed in the same manner. First of all, and its inclusion into are formed by
pushout:
What is more, because the left square and outer squares are both pushouts, the right square must also be a pushout.
Now the left square of (3.4) can easily be seen to commute:
What is more, we have just established that the bottom square is a pushout, so the outer square must also be a pushout.
The triangle between and follows easily:
3.5.1 Soundness
We want to make sure these rules are sound with respect to the valuation we described in section 3.4. In other words, if the assumptions of one of the preceding rules holds under a particular valuation, then the conclusion must hold under the same valuation.
However, the rules above deal with string graph equations, and valuations apply to framed cospans. We need to introduce frames to both sides of a string graph equation in a way that is compatible with the span that forms the equation.
Definition 3.5.3. A framing of a string graph equation is a pair of framed cospans and such that there are morphisms and forming the coproduct (ie: disjoint union) and making the following diagram commute:
Note that it is always possible to frame a string graph equation in the following way: let be a subgraph of , and its complement. Place an arbitrary ordering on and , and construct signing maps so that if and only if (and hence also ) is an input, and if and only if (and hence also ) is an output. and are then frames. We set and to be the natural inclusions, and define and by the above diagram. and are then framed cospans, and so we have a framing of the string graph equation.
In addition, fixing a framed cospan for either or fixes a framing for the equation. This is because fixing and , for example, fixes and (since is monic).
We say that a string graph equation holds under a valuation if for all framings (all positive framings if the valuation is not over a compact closed category) we have that . In fact, by the following proposition, this is equivalent to requiring it for just a single framing.
Proposition 3.5.4. For any valuation and any string graph equation , if and are framings of (where both are positive unless is over a compact closed category), then if and only if .
Proof. Consider the domains and codomains of and , indexed by the vertices of their respective frames, and .
Using the same construction as for the value, we can find a tensor product of identities and maps and composed of identities, swaps and dual maps such that
and this construction depends only on the frames of the cospans, since these are what determine the domains and codomains of the values and their indexings. But this means that we also have
since and have the same frames, and and also share their frames. Thus if , then .
The same argument works in reverse, giving us that if , then .
Soundness of the reflexivity and symmetry axioms follows from the reflexivity and symmetry of . For transitivity, we just need to note that, given a framing of , there is a framed cospan such that and are framings of and respectively.
Proposition 3.5.5. The LEIBNIZ rule for string graphs is sound. In other words, if we have the following commuting diagram
where the top and bottom spans are string graph equations, a framing of and a framing of , then for all valuations ,
Proof. By proposition 3.5.4, it suffices to choose a specific framing for each equation, instead of showing it for all framings.
Since all the arrows in the above diagram are monic (and hence injective), we will consider them to be subgraph relations (so is a subgraph of by , and so on). For convenience, we will use to refer to less any isolated wire-vertices.
We start by noting that consists only of isolated wire-vertices, but and have no isolated wire-vertices. This means that, since is a pushout of and from , each elementary subgraph of must be an elementary subgraph of exactly one of or . Similarly, (defined in section 3.4.3) is the disjoint union of and , and likewise for . The same results hold for .
It then suffices to show that, when calculating the value of a framed cospan of , we can choose framings and for the equations such that is of the form
and is
for some indexed morphism . The result then follows immediately.
We choose an order for the elementary subgraphs of (none of which are isolated wire-vertices), and an order for the variable-arity edges of those subgraphs. Tensoring the elementary subgraphs together in order gives us , which can be indexed by and , appropriately ordered. We do the same for and to get and , indexed similarly.
Now and are valid choices for the same values for and , and we can order using the order of followed by the order of , and similarly for , and . These can then be used to index and , and these indexings agree with the indexings for and .
Next, we arbitrarily order as and as . We apply the contractions in order to and to to get and respectively. At this point, we note that, by proposition 3.4.6,
and similarly . But if we choose the framing so that the ordering of the frames matches the ordering of and , we have that . To get , we may need to pre- and post-compose with swap maps to get an order that agrees with the frames. Call these and . Then .
If we order as , a similarly suitable choice of the framing gives us
It remains to show that
We need to consider the decomposition of and into their individual swap operations. If any given swap operates on an index that is then contracted (ie: one of the ), it can be subsumed into that contraction, due to the way contractions are defined. So we can construct reordering maps and that only operate on inputs and outputs (respectively) of such that
But this is the same as
We will refer to this (indexed) morphism as .
We need to show that the indexing of this morphism agrees with the order on the frames of . We will just treat the frame, as the frame is symmetric. Note that we have the following commuting diagram, by the definition of a framing, which allows us to consider a subgraph of .
We split , which consists of the inputs of (and therefore of ), into two parts: are those vertices that are inputs of , and are the inputs of (and hence of ). Note that we can consider a subgraph of and a subgraph of .
Suppose . Then either both are in , both are in , or and . In the latter case, we know that appears before in the indexing of the domain of due to the construction of . Suppose both are in . Then we know that appears before in the indexing of , because that is how we chose the ordering on . Otherwise, both are in . We know that appears before in the indexing of , and hence in the indexing of . If the same is true of the indexing of , it must be the case that , and hence , does not swap the order of and , and so appears before in the indexing of . Otherwise, if is before in the indexing of , (and hence ) must swap with , and so we still have that is before in the indexing of .
Proposition 3.5.6. The HOMEO rule for string graphs is sound. In other words, given wire-homeomorphic string graph equations and with respective framings and ,
Proof. The precondition on the wire homeomorphisms allows us to construct framings of the equation that are consistent with each other in the manner required by proposition 3.4.8. From that proposition, we get
and it follows by transitivity of that
3.6 Graph Rewriting
Graph rewriting has been around in some form since at least the 1960s; it made an early appearance in graph grammars, a field that aimed to apply formal language theory to multidimensional objects rather than just linear strings, notably in [33] and [36].
In term rewriting, we replaced a subterm of a term with another term that we viewed as equivalent in some way. In a similar manner, graph rewriting (at least as far as this thesis is concerned) aims to replace a subgraph of a graph with another "equal" graph. There are several approaches to this problem, almost all of which involve a graph to be searched for and removed from the target graph, a graph to replace it with, and some embedding transformation describing how to map from the connectivity of with to edges between and .
Algebraic graph rewriting systems express using constructs from category theory, which provides a useful set of tools for reasoning about such systems. Using these systems allows us to make use of established work, such as proofs that independent rewrites (which operate on different parts of ) can be applied in either order and achieve the same result [14].
There are two main approaches to algebraic graph rewriting: single-pushout (SPO) and double-pushout (DPO). DPO was the first algebraic graph rewriting system, introduced by Ehrig, Pfender and Schneider in [15]. A rewrite rule is represented as a span of graphs
and a rewrite from to is a pair of overlapping pushouts
The embedding transformation is described by and . We can think of as being the interface of the rewrite rule, the part allowed to interact with the rest of . This part is held in place when the rest of is removed from , allowing the connectivity with to be maintained.
SPO, as presented by Löwe in [41], generalises DPO by using a single pushout
in the category of partial graph morphisms. This allows for rewrites that would be prohibited under DPO, such as deleting vertices in unknown contexts. For example the SPO rewrite
has no equivalent DPO rewrite, because of the edge between the grey and black vertices.
We will make use of DPO as our graph rewriting system, as we do not require (and, in fact, wish to avoid) the extra flexibility of SPO.
Similar to the term case, we direct a string graph equation (definition 3.5.1) to get a string graph rewrite rule. Given a string graph rewrite rule , a string graph rewrite is a pair of pushouts of monomorphisms in (3.5)
(which should also be familiar from the rules of section 3.5).
Note that the inclusion of into must contain in its image. This is because any input or output of in the image of must map from an input or output of , and must therefore be in the image of . The same is true of the inclusion of into . What is more, the constraints on and mean that the preimages of and under these inclusions coincide. The same is also true for and and for and . If we let be the shared preimage of the boundaries of and , then, providing and have no isolated wire-vertices, the span at the bottom of
is a string graph rewrite rule.
As the term "rewriting" suggests, what we actually want to do is start with the string graph rewrite rule and , and find the remainder of the diagram. We start by finding a matching, a map (see (3.5)) such that and can be extended to a pushout; ie: they have a pushout complement.
Definition 3.6.1 (Pushout complement). A pushout complement for a pair of arrows
is an object and a pair of arrows
such that
is a pushout.
The following result is due to Dixon and Kissinger [11], based on a similar result for adhesive categories in [27].
Proposition 3.6.2. If a pair of arrows in , where is monic, has a pushout complement, it is unique up to isomorphism.
Definition 3.6.3 (String graph matching). For a string graph and a string graph rewrite rule , a monomorphism is called a string graph matching if has a pushout complement.
We require a stronger form of arity-matching in order to guarantee pushout complements:
Definition 3.6.4. A map between -typed graphs and is a local isomorphism if for every , the restriction of to the edge neighbourhood of is a bijection onto the edge neighbourhood of .
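As an illustrative aside, the local isomorphism condition is straightforward to check mechanically. The following Python sketch assumes a minimal graph encoding of our own devising (a dict with "verts" and "edges", edges as (source, target) pairs, and the morphism given as a vertex map plus an edge map); none of these names reflect Quantomatic's actual implementation.

```python
# Hedged sketch of the local-isomorphism condition (Definition 3.6.4),
# under an assumed minimal encoding:
# a graph is {"verts": set, "edges": set of (src, tgt) pairs}.

def edge_neighbourhood(graph, v):
    """All edges incident to vertex v (in- and out-edges)."""
    return {e for e in graph["edges"] if v in e}

def is_local_isomorphism(f_v, f_e, G, H, node_vertices):
    """f_v: vertex map, f_e: edge map of a morphism G -> H.
    Local isomorphism: around every node-vertex n of G, the edge map
    restricts to a bijection onto the edge neighbourhood of f_v[n]."""
    for n in node_vertices:
        dom = edge_neighbourhood(G, n)
        cod = edge_neighbourhood(H, f_v[n])
        img = {f_e[e] for e in dom}
        if len(img) != len(dom) or img != cod:  # injective and surjective
            return False
    return True
```

For instance, the identity morphism on a node-vertex with one in-wire and one out-wire passes the check, while mapping it into a graph whose image vertex has an extra incident edge fails surjectivity.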
Proposition 3.6.5 (Kissinger). Let be a string graph rewrite rule. Then any monic local isomorphism is a string graph matching.
So if we find a monomorphism , we can assemble the first pushout square of (3.5): since and are monic, we can view them as expressing the subgraph relation. Then is obtained by removing from everything in the image of but not in the image of . But we still need to be able to form the second pushout.
Boundary coherence proves to be the right notion for determining when we can form pushouts in .
Definition 3.6.6. A span of morphisms in is called boundary-coherent if:
for all , at least one of and is an input; and
for all , at least one of and is an output.
A single morphism is called boundary-coherent if the span is.
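The boundary-coherence conditions can likewise be phrased as a simple check. The sketch below is hypothetical: it assumes the quantifier ranges over the wire-vertices of the common source of the span, that "input" means a vertex with no in-edges and "output" one with no out-edges, and it uses our own minimal encoding rather than Quantomatic's.

```python
# Hedged sketch of Definition 3.6.6 for a span G <-f- B -g-> H.
# Encoding assumption: a graph is {"edges": set of (src, tgt) pairs}.

def is_input(graph, v):
    """A vertex is an input if it has no in-edges."""
    return all(t != v for (_, t) in graph["edges"])

def is_output(graph, v):
    """A vertex is an output if it has no out-edges."""
    return all(s != v for (s, _) in graph["edges"])

def boundary_coherent(f, g, G, H, wire_vertices_of_B):
    """f, g: vertex maps of the two legs of the span."""
    for b in wire_vertices_of_B:
        if not (is_input(G, f[b]) or is_input(H, g[b])):
            return False
        if not (is_output(G, f[b]) or is_output(H, g[b])):
            return False
    return True
```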
Theorem 3.6.7 (Kissinger). A span of morphisms in has a pushout if and only if it is boundary-coherent.
Our definition of a string graph rewrite rule means that the boundary coherence of the maps in the left-hand pushout propagates to the maps on the right, meaning that if we can construct one of the pushouts in (3.5), we can always construct the other one.
Lemma 3.6.8. Let be a string graph rewrite rule and be a string graph morphism. Then and as a span are boundary-coherent if and only if and are.
Proof. This follows directly from the fact that, for all vertices in , is an input (resp. output) of if and only if is an input (resp. output) of .
Corollary 3.6.9. Let be a string graph rewrite rule and a monic string graph local isomorphism. Then has an -rewrite at .
Constructing the pushout is easy: pushouts along monomorphisms in Graph (and its slice categories) are just graph unions (with an explicit intersection). Likewise, pushout complements from monomorphisms can be calculated using graph subtraction. The above results guarantee that we will get valid string graphs when we do so.
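To make the preceding remark concrete, here is a hedged sketch of both steps as set operations, under the simplifying assumption that every morphism is a subgraph inclusion (so vertex and edge names are shared between the graphs); the encoding and function names are our own, not Quantomatic's.

```python
# Sketch of the two DPO steps on a rule L <- K -> R applied to G,
# assuming all morphisms are inclusions: the pushout complement is
# graph subtraction, and the pushout is graph union.

def pushout_complement(L, K, G):
    """Remove from G whatever is in L but not in the interface K."""
    return {"verts": (G["verts"] - L["verts"]) | K["verts"],
            "edges": (G["edges"] - L["edges"]) | K["edges"]}

def pushout(K, R, G_minus):
    """Glue R onto the context graph along the (shared) interface K."""
    return {"verts": G_minus["verts"] | R["verts"],
            "edges": G_minus["edges"] | R["edges"]}

def dpo_rewrite(L, K, R, G):
    return pushout(K, R, pushout_complement(L, K, G))
```

For example, rewriting the path a -> b -> c with a rule that deletes the edge a -> b (interface {b}) and attaches a new vertex d with edge d -> b yields a graph on {b, c, d} with edges b -> c and d -> b.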
Example 3.6.10. The following string graph rewrite is an application of the string graph rewrite rule for the unit law in the calculus:
3.6.1 Equational Reasoning with Rewrites
Rewriting is typically done with respect to some set of rewrite rules , called a (graph) rewrite system. We say that a string graph rewrites to a string graph under the graph rewrite system (or ) if there is a string graph rewrite rule and a string graph matching such that
is a string graph rewrite.
If we define
then implements the LEIBNIZ and AXIOM rules of the previously-presented equational logic for string graphs. In particular, string graph rewriting implements a single application of AXIOM followed by a single application of LEIBNIZ (which, by proposition 3.5.2, is equivalent to any proof tree containing just the AXIOM and LEIBNIZ rules).
It then follows that , the reflexive, symmetric, transitive closure of , implements the previously presented equational logic with HOMEO omitted.
3.6.2 Matching up to Wire Homeomorphism
There is one more rule we need to account for in our mechanisation process: HOMEO. Ideally, we would like to do rewriting "up to wire homeomorphism".
Given a string graph rewrite rule and a graph , we say that rewrites to by up to wire homeomorphism if there are and , wire homeomorphic to and respectively, such that rewrites to by .
We would like to find the set of graphs such that rewrites to by up to wire homeomorphism, quotiented out by wire homeomorphism (ie: so that no two graphs in are wire homeomorphic to each other).
We can eliminate duplicate wire homeomorphic graphs as a last step by settling on some standard form (such as having at most one wire-vertex on each wire, except on bare wires, which must have two). However, this does not help to ensure we have found all possible matches without trying the (likely infinite number of) graphs wire homeomorphic to . We demonstrate a technique for constraining the search space at the matching step by making a careful choice of and .
Any monomorphism from to must necessarily map wires of onto wires of of the same type in the following manner:
A circle of must map to a circle of , in which case nothing else can map to that same circle.
An interior wire of can only map to an interior wire of , and this must be the only thing mapping to the wire.
A bare wire of can map to any wire of , and need not be the only thing to map to that wire.
An input wire of can map to either an input wire or an interior wire of ; in both cases, it need not be the only thing mapping to that wire.
Output wires are similar to input wires.
Note that multiple bare wires can match the same wire simultaneously, as well as matching an interior wire that has already been matched by an input and/or an output wire, or an input wire that has been matched by an input wire, or an output wire that has been matched by an output wire.
Let be the number of bare wires in . We choose to be the (unique, up to isomorphism) graph that is wire homeomorphic to such that
every circle has wire-vertices on it
every interior wire has exactly wire-vertices on it
every bare wire has no wire-vertices other than the input and output
every input or output wire has no wire-vertices other than the input or output itself
We then choose to be the (unique, up to isomorphism) graph that is wire homeomorphic to such that
every circle has wire-vertices on it
every interior wire has exactly wire-vertices on it
every bare wire has exactly wire-vertices on it (including boundary vertices)
every input or output wire has exactly wire-vertices on it (including the boundary vertex)
We say that these graphs are normalised.
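As an illustrative sketch of the normalisation step, suppose a wire is represented simply as the ordered list of its vertices. The hypothetical helper below (our own, not Quantomatic's) replaces the interior wire-vertices of a wire with exactly k fresh ones, where k would be chosen according to the counts above (which depend on the number of bare wires).

```python
# Hedged sketch: re-subdivide one wire to carry exactly k interior
# wire-vertices, keeping its endpoints. A wire is an ordered vertex list.
from itertools import count

def fresh_names(prefix="w"):
    """An endless supply of fresh wire-vertex names (illustrative)."""
    return (f"{prefix}{i}" for i in count())

def normalise_wire(wire, k, fresh):
    """wire = [end, v1, ..., vn, end']; replace v1..vn with k fresh
    interior wire-vertices."""
    return [wire[0]] + [next(fresh) for _ in range(k)] + [wire[-1]]
```

Applying this to every circle, interior wire, bare wire and input/output wire of a graph, with the appropriate k for each kind, would produce the normalised graphs described above.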
Any algorithm for finding graph monomorphisms (such as those in [40] or [2]) can then be used (together with a suitable filter) to find local isomorphisms from to . Now consider an arbitrary wire of .
If is a circle, it can be matched by any circle of the same type in , since circles in both graphs have the same number of wire-vertices. Alternatively, if , it can be matched by any or all of the bare wires of .
If is an interior wire, it can either be matched by one of the interior wires of (which have the same number of wire-vertices) or it can be matched by any combination of an input wire (consuming a single wire-vertex at one end), an output wire (similarly) and any or all of the bare wires (on the remaining wire-vertices).
If is a bare wire, it can be matched by any or all of the bare wires of .
If is an input wire, it can be matched by an input of , which will consume one of the wire-vertices; this will still allow any or all of the bare wires of to match.
Finally, if is an output wire, it can similarly be matched by an output of and/or any or all of the bare wires of .
Rewriting using all of the matches found in this way will produce a finite collection of rewritten graphs. Contracting all the wires in these graphs so that there is only a single wire-vertex on each, except for bare wires, which need two wire-vertices, and eliminating the duplicates will give us the set we originally wanted.
Remark 3.6.11. The proliferation of wire-vertices and edges between them in and when has bare wires means that this is not the most efficient way of doing matching up to wire homeomorphism. Quantomatic takes a slightly different approach, initially assuming and delaying the matching of bare wires from until everything else has been matched; each bare wire is then matched against an edge of , which is replaced by two wire-vertices and three edges at this point. See section 7.2 for details.
Example 3.6.12. The graph
would be normalised to
if it were the graph from the rule and to
if it were the target graph .
Chapter 4
!-Graphs
String graph rewrite systems have a shortcoming in that they do not have a finite way of representing infinitary rewrite rules. In this chapter, we present !-graphs (pronounced "bang graphs"), an extension of string graphs, as a way of expressing certain infinite families of string graphs in a finite way. In the next chapter, we will use these to construct infinite families of string graph rewrite rules and use them to rewrite string graphs, both directly and by rewriting !-graphs.
Recall the spider laws of section 2.2 that arose from commutative Frobenius algebras (CFAs): (4.1)
In a single diagram, this describes an infinite family of equations. However, while its meaning may be clear to a human, we need a more precise encoding for a proof assistant to be able to handle equations like this. In [9], Dixon and Duncan briefly described a system of graph patterns to handle laws like this. However, they provided no mathematical foundation for these. Kissinger, Merry and Soloviev provided such a foundation in [26] (as pattern graphs), making the system more powerful in the process. That joint work with the author of this thesis, which contained no proofs, forms the basis of this chapter, although we use the term !-graphs instead of pattern graphs to avoid confusion when describing matching (where the two graphs involved are often called the pattern and target graphs).
!-graphs formalise the notion of repetition indicated by the ellipses and encode it in the graph itself. What will actually be encoded is a collection of labelled subgraphs called !-boxes. A !-graph can be instantiated into a string graph by removing ("killing") or duplicating ("copying") these subgraphs (including incoming and outgoing edges) some (finite) number of times. In this way, a !-graph can stand in the place of an infinite family of string graphs:
We can even express the bipartite graph from the generalised bialgebra law
by nesting !-boxes inside other !-boxes:
Section 4.4.1 goes into detail about the effect of nested and overlapping !-boxes.
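The kill and copy operations can be pictured with a hedged sketch. The encoding below (graphs as dicts of vertex and edge sets) and the function names are our own illustrative choices, not the formal definitions of this chapter, which act on open subgraphs.

```python
# Illustrative kill/copy on a !-box given as a set of vertices.
# A graph is {"verts": set, "edges": set of (src, tgt) pairs}.

def kill(G, box_verts):
    """Remove the !-box contents and every edge touching them."""
    return {"verts": G["verts"] - box_verts,
            "edges": {(s, t) for (s, t) in G["edges"]
                      if s not in box_verts and t not in box_verts}}

def copy(G, box_verts, clone):
    """Duplicate the !-box contents; `clone` maps each box vertex to a
    fresh name. Edges inside the box and edges crossing its boundary
    are duplicated alongside the vertices."""
    new_edges = set(G["edges"])
    for (s, t) in G["edges"]:
        if s in box_verts or t in box_verts:
            new_edges.add((clone.get(s, s), clone.get(t, t)))
    return {"verts": G["verts"] | set(clone.values()),
            "edges": new_edges}
```

For example, copying a box containing a single wire-vertex attached to a node-vertex outside the box yields two parallel wires into that node, which is exactly the behaviour the spider laws require.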
4.1 Open Subgraphs
Note: when we refer to a subgraph of a string graph in this section, we mean a subgraph in - ie: the subgraph is also a string graph.
It should be clear that we cannot allow !-boxes to be arbitrary subgraphs. Copying in the following graph, for example, will lead to an invalid string graph, as there will be a wire-vertex with multiple outputs:
Copying or killing a !-box must also not affect the fixed-arity edges of node-vertices, otherwise the arity-matching property of the typing morphism will be broken.
So we must restrict our !-boxes to those that can be copied. For string graphs, the correct notion is that of open subgraphs. Note that the definition here differs from, but is equivalent to, the one in [26] (p. 5).
Definition 4.1.1 (Open Subgraph). A subgraph of a string graph is said to be open if it is not adjacent to any wire-vertices or incident to any fixed-arity edges.
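This definition can be checked directly by inspecting the edges that cross the subgraph's boundary. The sketch below uses an assumed minimal encoding of our own, with the wire-vertices and fixed-arity edges of the ambient graph supplied explicitly.

```python
# Hedged sketch of the openness test (Definition 4.1.1): a subgraph is
# open when no wire-vertex outside it is adjacent to it, and no
# fixed-arity edge crosses its boundary.
# Encoding: G = {"edges": set of (src, tgt) pairs}; O_verts is the
# vertex set of the candidate subgraph.

def is_open(G, O_verts, wire_vertices, fixed_arity_edges):
    for (s, t) in G["edges"]:
        crosses = (s in O_verts) != (t in O_verts)
        if not crosses:
            continue
        outside = t if s in O_verts else s
        if outside in wire_vertices:      # adjacent to a wire-vertex
            return False
        if (s, t) in fixed_arity_edges:   # incident to a fixed-arity edge
            return False
    return True
```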
This is also related to boundary-coherence in SGraph:
Proposition 4.1.2. Let be a string graph, and a subgraph of in . Then is open if and only if it has a complement (in ), and the embedding of this complement into is boundary-coherent.
Proof. Suppose is open in , and consider , which is a graph in . Let be a node-vertex in , and let be the restriction of to the fixed edge neighbourhood of , and similarly for . Since is a subgraph of , is a restriction of . So is a restriction of , and hence is injective and its image consists only of fixed-arity edges of the typegraph. Suppose it is not surjective, so there is some fixed-arity edge incident to that is not in the image of . Then the preimage of under must not be in . Since is in , and is full in , the other end of , which we will call , must be in . But then is incident to a fixed-arity edge (namely ), and hence is not open in . So is a string graph, since (being a subgraph of ) it must have at most one incoming and one outgoing edge on each wire-vertex.
For boundary-coherence of the embedding , we need to show that any input of is also an input of , and likewise for outputs. But the only way an input in could not be an input in is if there is an edge in from some vertex to . Because is not in , neither can be. So must be in , and so is incident to a wire-vertex , contradicting the openness of . The argument for outputs is analogous, and so we have boundary-coherence as required.
Conversely, suppose is a (string) subgraph of , and that its complement is also a string graph with a boundary-coherent embedding into . Suppose is adjacent to a wire-vertex , via an edge . Then is in , but is not. If is the target of , must be an input of but not of . Otherwise, is the source of , and is an output of but not of . Either way, cannot be boundary-coherent. So there is no such .
Now suppose is incident to a fixed-arity edge . Let be the end of that is not in . It cannot be a wire-vertex, since we have already shown that cannot be adjacent to a wire-vertex. So is a node-vertex. Let . Then is in the fixed neighbourhood of , but is not in the image of . So is not arity-matching, contradicting the assumption that is a string graph. Thus there can be no such edge , and is open in .
The use of the term "open" brings with it the expectation that the property is preserved under unions and intersections (since we are dealing with finite graphs). The following proposition shows that this is, indeed, the case.
Proposition 4.1.3 ([26], p. 5). If are open subgraphs, and is an arbitrary subgraph, then and are open in , and is open in .
Proof. Proposition 3.3.10 gives us that the graphs are all string graphs.
Suppose is adjacent to a wire-vertex. Then there is a wire-vertex in but not in adjacent to a vertex in . If is not in , we have a contradiction of the openness of . So must be in . Similarly, must be in . But then is in , which contradicts our assumption. So there can be no such . Instead, suppose is incident to a fixed-arity edge . Again, must be in , since cannot be incident to a fixed-arity edge, and must also be in for the same reason, and hence is in , and so not incident to it. So there is no such , and is open in .
Suppose is adjacent to a wire-vertex. Then there is a wire-vertex in but not in adjacent to a vertex in . It must either be in or ; WLOG assume it is in . Then, since is not in , is adjacent to a wire-vertex. But this cannot be the case, since is open, so there can be no such . Similarly, if there is a fixed-arity edge incident to , it must be incident to either or , which cannot be the case. So there is no such , and is open in .
Suppose there is a wire-vertex in but not in adjacent to a vertex in . Then is in but not in , and is in , and they are joined by an edge in . So is adjacent to a wire-vertex in , contradicting its openness in , so there is no such . Suppose now that is incident to a fixed-arity edge in , and let be the vertex in that it is incident to. Then is in , and is in but not , and so cannot be open in , which contradicts the assumption. So there can be no such , and is open in .
4.2 The Category of !-Graphs
The !-boxes are encoded into the graph itself as a new vertex type. These !-vertices act as the labels for the !-boxes, and the edges from the !-vertices to other graph vertices indicate the !-box contents. We therefore need to extend the derived typegraph of a monoidal signature to include this new vertex.
Definition 4.2.1 (Derived Compressed !-typegraph). The derived compressed !-typegraph of a compressed monoidal signature has the same vertices as (see definition 3.3.5) together with a distinct vertex , and the same edges as together with a self-loop on and, for each vertex of , an edge from ! to .
Note that the definition of ensures that there can be no edges from wire- or node-vertices to !-vertices.
The derived compressed !-typegraph of the compressed monoidal signature
would be
The wire-vertex and node-vertex terminology is inherited from string graphs. In addition, for a graph in , we call any vertex where a !-vertex, and denote the set of all !-vertices in as .
We alter the definition of an input slightly from the string-graph case, due to the new vertex type: a wire-vertex is an input if its only in-edges are from !-vertices.
For a !-vertex , let be its associated !-box. This is the full subgraph whose vertices are the set of all of the successors of . We also define the parent graph of a !-vertex as the full subgraph of predecessors, that is, the full subgraph generated by . Note that the typegraph constrains to contain only !-vertices (and edges between them).
As a notational convenience, when dealing with subgraphs of -typed graphs where the inclusion map is implicit, we will use subscripts to identify which graph we are referring to. So if is a subgraph of and is a !-vertex in , will be its !-box in , and will be the !-box of when considered as a vertex of .
Note that, for a given monoidal signature , is a subgraph of . Since Graph has pullbacks, theorem 3.3.2 gives us a full, coreflective embedding , whose right adjoint is the forgetful functor that simply drops all !-vertices.
Definition 4.2.2 (!-Graph and ; [26], p. 5). A -typed graph is called a !-graph if:
is a string graph;
the full subgraph with vertices , denoted , is posetal;
for all , is an open subgraph of ; and
for all , if then .
is the full subcategory of whose objects are !-graphs.
Recall that a graph is posetal if it is simple (at most one edge between any two vertices) and, when considered as a relation, forms a partial order. Note in particular that this implies (and ), by reflexivity. This partial order allows !-boxes to be nested inside each other, provided that the subgraph defined by a nested !-vertex is totally contained in the subgraph defined by its parent (condition 4).
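The posetal condition on the !-vertex subgraph can be tested directly. The following sketch (our own encoding: a vertex set plus a set of directed edges, in which parallel edges cannot arise) checks reflexivity, antisymmetry and transitivity.

```python
# Hedged sketch of the posetal check on the !-vertex subgraph.

def is_posetal(verts, edges):
    rel = set(edges)  # simple by construction: a set holds no parallel edges
    if any((v, v) not in rel for v in verts):            # reflexive
        return False
    if any((b, a) in rel and a != b for (a, b) in rel):  # antisymmetric
        return False
    return all((a, d) in rel                             # transitive
               for (a, b) in rel for (c, d) in rel if b == c)
```

For example, a three-element chain with all reflexive loops and the composite edge passes; removing the composite edge breaks transitivity, and adding a reverse edge breaks antisymmetry.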
As with string graphs, we only consider finite !-graphs in the sequel.
We introduce special notation for !-graphs. !-vertices are drawn as squares, but rather than drawing edges to all of the node-vertices and wire-vertices in , we simply draw a box around them.
In this notation, we retain edges between distinct !-vertices to indicate which !-boxes are nested, as opposed to simply overlapping. This distinction is important, as nested !-boxes are copied whenever their parent is copied.
We will prove a few useful results about !-graphs and about . Firstly, we have a sufficient condition for a subgraph of a !-graph to be a !-graph.
Lemma 4.2.3. Let be a !-graph and a full subgraph of such that is a string graph. Then is a !-graph.
Proof. is a string graph by assumption.
Since is full in , must be a full subgraph of . But a full subgraph of a posetal graph is itself posetal, so is posetal.
Let . Then is an open subgraph of , and , which is open in by proposition 4.1.3.
Let , with . Then we must have , since is a !-graph. So . Since is full in , we know that , and similarly for . Then , so .
So is a !-graph, as required.
Corollary 4.2.4. Let be a !-graph, and a subgraph of such that is open in . Then is a !-graph.
It is also useful to know under what conditions a !-graph morphism is monic or (strongly) epic in . These conditions turn out to be the same as for .
Lemma 4.2.5. A morphism in is monic iff it is injective.
Proof. Since this holds in , any injective map in must be monic in , and hence also monic in .
Suppose we have a non-injective morphism in . Then must either map two or more edges in to the same edge in , or map two or more vertices in to the same vertex in . In fact, since and are both simple, must map two or more vertices in to a single vertex in . Consider the smallest sub-!-graph of containing . This is the vertex itself, plus a self-loop if it is a !-vertex, or any fixed-arity edges (plus their terminating wire-vertices) if it is a node-vertex. Then for each , we construct the !-graph morphism that takes the single vertex in to (this will extend to any incident edges of the vertex in a unique way, dictated by the typing morphisms). Now all the are the same morphism, but the morphisms are distinct, so is not monic.
Lemma 4.2.6. Let be a !-graph morphism. Then , the image of , is a !-graph.
Proof. First we note that is a subgraph of , and let be the inclusion. Then is a subgraph of the string graph . It therefore satisfies the wire-vertex conditions for string graphs, and we only need check that restricts to an arity-matching morphism. But we know that is arity-matching, and hence restricts to a bijection on the fixed-arity neighbourhood of every node-vertex in and its image in , and is also arity-matching, and and commute. So must be arity-matching, and hence must be a string graph.
is a subgraph of . But then is simple and anti-symmetric, since is, and is reflexive and transitive, since is, and hence is posetal.
Let . Then there is a with . Now , which is open in by proposition 4.1.3.
Let , with . Then we must have . So . But then , and similarly for , and , so . So is a !-graph, as required.
Lemma 4.2.7. A morphism in is a strong epimorphism iff it is surjective.
Proof. Since this holds in , any surjective map in must be a strong epimorphism in , and hence a strong epimorphism in .
Now suppose is a strong epimorphism. Let be the restriction of to its image (which is also a !-graph by lemma 4.2.6) and let be the inclusion of the image of in . Then is monic, so there exists a map making the following diagram commute:
But then , so must be surjective and hence so is .
Finally, it is worth noting how and are related.
Definition 4.2.8 (Concrete Graph). A !-graph with no !-vertices is called a concrete graph.
The full subcategory of consisting of concrete graphs is, in fact, the image of under the previously-mentioned embedding functor . Concrete graphs and string graphs will therefore be considered interchangeable.
Indeed, the restriction of to is a full, coreflective embedding, and its right adjoint is the restriction of to . As a result, we will also use and to refer to these restrictions; which is meant should be clear from context. We use to mean (and note that is closed under ).
We can extend the notation of definition 4.2.2 to morphisms of (and hence of ) by making their action the obvious restriction. More precisely, if is the subgraph of consisting of only the !-vertex and its self-loop (with being its inclusion map), we can view as (in the terminology of theorem 3.3.2).
4.2.1 Wire Homeomorphisms
We will need to extend our notion of wire-homeomorphism to !-graphs.
A wire homeomorphism between two !-graphs and consists of four bijective type-preserving functions
such that
for each wire in with a source in (resp. Bound ), the source of in is (resp. ), and similarly for targets of wires
for each , there is an edge in from to a vertex in (resp. , Bound ) if and only if there is an edge in from to (resp. )
for each and each wire , there is an edge in from to each if and only if there is an edge in from to each
This is the same as the definition for string graphs, but with a map for !-boxes and some constraints to preserve !-box containment. This means that if is a wire-homeomorphism of !-graphs, discarding will give a wire-homeomorphism .
4.3 The !-Box Operations
We have already mentioned that !-boxes are intended to be subgraphs that can be copied or deleted.
The precise set of operations allowed on a !-box is the following ([26], pp 6), although we omit MERGE from that list and revisit it in section 6.5:
These are inspired by the rules for the "bang" operation from linear logic [16]. In particular, COPY corresponds to contraction, DROP to dereliction and KILL to weakening.
Definition 4.3.1 (!-Box Operations; [26], pp 6). For a !-graph and , the three !-box operations are defined as follows:
is defined by a pushout of inclusions in :
(4.2)
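As a concrete (if simplified) illustration of the three operations, the Python sketch below models a !-graph as a dict of typed vertices plus a set of directed edges, and takes a !-box's contents to be the targets of edges out of its !-vertex. It ignores nesting subtleties and wire boundaries, so it is a toy model of definition 4.3.1, not Quantomatic's implementation:

```python
def box_contents(g, b):
    """Vertices inside the !-box b: targets of edges out of b, minus b."""
    return {t for (s, t) in g['edges'] if s == b and t != b}

def kill(g, b):
    """KILL: delete b and everything in its !-box."""
    dead = box_contents(g, b) | {b}
    return {'vertices': {v: k for v, k in g['vertices'].items() if v not in dead},
            'edges': {e for e in g['edges'] if e[0] not in dead and e[1] not in dead}}

def drop(g, b):
    """DROP: delete the !-vertex b itself, keeping its contents as
    ordinary graph structure."""
    return {'vertices': {v: k for v, k in g['vertices'].items() if v != b},
            'edges': {e for e in g['edges'] if b not in e}}

def copy(g, b):
    """COPY: glue a fresh copy of b's !-box onto the rest of the graph,
    mimicking the pushout of inclusions in (4.2)."""
    dup = box_contents(g, b) | {b}
    ren = lambda v: v + "'" if v in dup else v
    verts = dict(g['vertices'])
    verts.update({ren(v): g['vertices'][v] for v in dup})
    edges = set(g['edges']) | {(ren(s), ren(t)) for (s, t) in g['edges']
                               if s in dup or t in dup}
    return {'vertices': verts, 'edges': edges}

# A single !-box b containing a node n, wired to an outside node m:
g = {'vertices': {'b': 'bang', 'n': 'node', 'm': 'node'},
     'edges': {('b', 'b'), ('b', 'n'), ('n', 'm')}}
assert ("n'", 'm') in copy(g, 'b')['edges']      # the copy re-attaches to m
assert drop(g, 'b')['edges'] == {('n', 'm')}     # contents survive DROP
assert kill(g, 'b')['vertices'] == {'m': 'node'} # contents die with KILL
```

Note how COPY duplicates edges that leave the box, so both copies of the contents attach to the same external vertices, just as the pushout identifies the complement of the !-box.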
Before we show that these operations preserve the property of being a !-graph, it will be useful to adapt the notion of boundary-coherence to !-graphs. The important thing is that, in addition to the restrictions on how the morphisms interact with the boundaries of the graphs, we also need to restrict how they interact with !-box containment. Note that the definition of boundary-!-coherence we give here is stronger than is actually needed for producing pushouts of !-graphs; this gives us a simpler definition and simpler proofs.
Definition 4.3.2. Let be a !-graph morphism. is said to reflect !-box containment if whenever we have an edge in whose source is a !-vertex and whose target is in the image of , is also in the image of .
A span of monomorphisms in is called boundary-!-coherent if the span formed from and is boundary-coherent and both and reflect !-box containment.
As in , a single morphism of string graphs is called boundary-!-coherent if the span is.
The main purpose of this definition is to provide us with a sufficient condition for a span to have a pushout in .
Proposition 4.3.3. If a span of monomorphisms in is boundary-!-coherent, then it has a pushout in .
Proof. Suppose and are boundary-!-coherent. Since they are monomorphisms, they have a pushout in Graph: (4.3)
It suffices to show that is a !-graph.
The span formed by and is boundary-coherent, so the graph that results from pushing out this span in is in (theorem 3.6.7), and so is a string graph. But this is just , by proposition 3.3.4.
Since all the morphisms are monic (and the square commutes), we can treat them as containment relations. So we will consider and to be subgraphs of , and to be their common subgraph.
Now we note that if we have an edge from a !-vertex to a vertex in , and another edge incident to in , both edges must be in or else both edges must be in . To see this, suppose not. Then one edge must be in and the other in . So must be in , since it is in both and . But then, as reflects !-box containment, must be in and hence in .
We need to show that is posetal. must be simple, since if we have two edges from a !-vertex to a !-vertex in , they must both be in or both in by the above reasoning, which cannot be the case as these are !-graphs. So there are no such edges. Anti-symmetry is shown in the same way by reversing one of the edges. Reflexivity is inherited from and . Transitivity follows by considering that if we have the following pair of edges
they must both be in or both in , and hence there must be an edge from to inherited from that graph. So is posetal.
Let , and suppose is not open in . Then there is either a wire-vertex adjacent to , or a fixed-arity edge incident to it. The former case is impossible, because if we have two edges such that
where is a wire-vertex, then and must both be in or both be in , and hence there must be an edge from to . Similarly, if there is a fixed-arity edge incident to , and must be both in or both in , and hence there must be an edge from to the other end of , and so is not incident to .
Let with , and let . We need to show that . But the edge from to and the edge from to must both be in or both be in , and so there must be an edge from to , as required.
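Because all the morphisms here are monic, the pushout can be computed very concretely as a glued union: take the disjoint union of the two graphs and identify the two copies of the common subgraph. A minimal Python sketch, assuming both legs are inclusions and the shared part keeps its vertex names (a simplifying assumption of this illustration):

```python
def pushout(k, g1, g2):
    """Pushout of inclusions k <= g1 and k <= g2 in Graph: vertices and
    edges outside k are tagged with the side they came from, so the two
    graphs overlap exactly on k."""
    def tag(g, i):
        name = lambda v: v if v in k['vertices'] else f"{v}@{i}"
        return ({name(v): kind for v, kind in g['vertices'].items()},
                {(name(s), name(t)) for (s, t) in g['edges']})
    v1, e1 = tag(g1, 1)
    v2, e2 = tag(g2, 2)
    return {'vertices': {**v1, **v2}, 'edges': e1 | e2}

# Glue two single-edge graphs along a shared vertex a:
k = {'vertices': {'a': 'node'}, 'edges': set()}
g1 = {'vertices': {'a': 'node', 'x': 'wire'}, 'edges': {('a', 'x')}}
g2 = {'vertices': {'a': 'node', 'y': 'wire'}, 'edges': {('y', 'a')}}
p = pushout(k, g1, g2)
assert set(p['vertices']) == {'a', 'x@1', 'y@2'}
assert p['edges'] == {('a', 'x@1'), ('y@2', 'a')}
```

The proposition's content is then that, when the span is boundary-!-coherent, this glued union again satisfies the !-graph conditions.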
We now proceed to prove that all the !-box operations produce !-graphs. This theorem was stated, but not proved, in [26] (pp 7).
Theorem 4.3.4. Let be a !-graph and . Then the -typed graphs and are all !-graphs.
Proof. Corollary 4.2.4 gives us that is a !-graph, and hence its embedding into is a !-graph monomorphism. This embedding is boundary-!-coherent: since is an open subgraph of , proposition 4.1.2 gives us that is a boundary-coherent morphism, and requirement 4 of definition 4.2.2 ensures that reflects !-box containment. So we know the pushout exists in by proposition 4.3.3, and hence is a !-graph.
The DROP and KILL cases follow from lemma 4.2.3 and corollary 4.2.4 respectively.
Since one of the reasons for using string graphs is their ability to model inputs and outputs, it is useful to know how these operations affect the inputs and outputs of !-graphs. Firstly, we extend , and to include !-boxes in the following way:
Definition 4.3.5. For a !-graph , we define to be the full subgraph of containing the vertices and ; Out to be the full subgraph containing and ; and is the subgraph containing and , and any edges with a source in and a target in either or .
These are all valid !-graphs. Now we can show that the !-box operations affect the boundary in the expected manner.
Lemma 4.3.6. Let be a !-graph, and , and consider the copy operation:
Then all the maps in the pushout take inputs to inputs and outputs to outputs.
Proof. The square is symmetric, and the case for inputs mirrors the case for outputs, so we just consider inputs for and .
That maps inputs to inputs follows from the fact that it is boundary-!-coherent.
Suppose , and consider . If there is an edge in with ,
then cannot be in the image of , since is an input of and is injective. So , and hence , must be in the image of . But then there must be a vertex in with and , since this is a pushout. must be an input of , since if there were an incoming edge to , it would have to map to an incoming edge of in , and there is no such edge. So must also be an input of , since maps inputs to inputs. Then cannot have an incoming edge, and so cannot be in the image of . So there is no such edge , and .
Theorem 4.3.7. Let be a !-graph, and . Then
;
; and
and similarly for and .
Proof. For the case, since is a !-graph, we can construct the pushout
in , where . This is a subgraph of :
where and are the inclusions from the pushout that defines .
Now every edge and vertex in is in the image of either or , and its preimage is in . Then it must be in the image of , so contains all !-vertices and edges between them from .
Now let be a boundary vertex of . It must have a preimage under at least one of and , and lemma 3.3.9 gives us that any such preimage in is also a boundary vertex, and hence in , and so in the image of . Conversely, if is a wire-vertex in the image of , it must be in the image of a boundary vertex of under either or . Then lemma 4.3.6 gives us that is a boundary vertex of .
If is an edge from a !-vertex to a boundary vertex in , it must be in the image of either or , and its preimage under these map(s) must likewise be an edge from a !-vertex to a boundary vertex in . So is in , and hence in the image of , and we have that
For the remaining cases, we note that . It then follows that
and similarly for . Then the result for follows from the fact that is an open subgraph of , and DROP from the fact that it does not affect .
The arguments for and are almost identical.
4.4 Matching String Graphs With !-Graphs
A !-graph can be considered to be a pattern for string graphs, similar in spirit to a regular expression, although less powerful. Implicit in the term "pattern", though, is that there should be a notion of matching. We already have a notion of what it means for a string graph to match another string graph: the first string graph must be a subgraph of the second (with some additional constraints). So what does it mean for a !-graph to match a string graph?
If is a !-graph, an obvious candidate for a string graph that should be matched by is , the concrete part of the graph. Recalling the discussion at the start of the chapter, our aim is to match the concrete part of !-graphs with "any number of copies" of each !-box in .
This is where the !-box operations come in. and provide a way of getting "any number of copies" (including zero) of the !-box . DROP then provides a way of producing a concrete graph (which we have already noted is essentially the same as a string graph). These provide a notion of instantiation, by which we can produce a string graph from a !-graph.
Note that this definition is slightly different to the one in [26] (pp 7). The notions of instance and instantiation in that paper correspond to concrete instance and concrete instantiation here.
Definition 4.4.1 (Instantiation). For !-graphs , we let , and say is an instance of , if and only if can be obtained from (up to isomorphism) by applying the operations from definition 4.3.1 zero or more times. This sequence of operations is called an instantiation of from . If is a concrete graph, it is called a concrete instance of , and any instantiation of it is also called concrete.
Remark 4.4.2. Given an instantiation of from , we will sometimes use the notation to refer to . Note that is not really a function: because the first operation of refers to a !-vertex of , cannot be applied to anything other than (although we can sometimes relax this constraint in the presence of certain morphisms, as we will see in later chapters).
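When the !-boxes of a !-graph are disjoint and non-nested, an instantiation is determined up to reordering by how many copies each box receives (n applications of COPY followed by KILL, or n-1 followed by DROP). Under that simplifying assumption, the concrete instances can be enumerated with a bounded search; the following Python sketch uses invented names and a flat list-of-vertices model purely for illustration:

```python
from itertools import product

def concrete_instances(fixed_part, boxes, max_copies):
    """Yield the concrete instances of a !-graph with disjoint,
    non-nested !-boxes, each box instantiated 0..max_copies times.
    `fixed_part` lists the vertices outside every !-box; `boxes`
    maps each !-box name to the list of vertices it contains."""
    names = sorted(boxes)
    for counts in product(range(max_copies + 1), repeat=len(names)):
        inst = list(fixed_part)
        for name, n in zip(names, counts):
            for i in range(n):
                inst.extend(f"{v}#{i}" for v in boxes[name])
        yield tuple(sorted(inst))

# A spider-like pattern: one node s, one !-box B holding an input wire:
instances = set(concrete_instances(['s'], {'B': ['in']}, 2))
assert ('s',) in instances                  # KILL straight away
assert ('in#0', 'in#1', 's') in instances   # two copies of the box
```

Without the bound the family is infinite, which is exactly why instantiation, rather than enumeration, is the primitive notion here.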
We can then say that a !-graph matches a string graph if and only if there is a concrete instance of that matches :
Definition 4.4.3 (Matching; [26], pp 8). Let be a !-graph, and a string graph. If there is a concrete instance of , with instantiation , and a monic local isomorphism , is said to match at under , and is said to be a matching of onto under .
Note that all the !-box operations yield a pattern that is at least as specific: if we apply a !-box operation to a !-graph to get another !-graph , any string graph matched by will also be matched by . can therefore be viewed as a refinement (pre-)ordering on !-graphs.
Given a !-graph that satisfies a couple of constraints on !-boxes that ensure there are only ever a finite number of matchings onto any given graph (for example, there can be no empty !-boxes), it is possible to decide for any string graph whether there is a matching of onto . An algorithm for this (that also produces the required instantiation) is given in chapter 7.
Example 4.4.4. Suppose we wish to use a rewrite rule implementing the spider law to rewrite
We start by normalising the target graph:
The highlighted wire-vertices are the ones that need to be matched by vertices that are in !-boxes in the LHS of the rule. So we create four copies of and two each of and
then kill and drop all the remaining !-boxes
This corresponds to the instantiation
where, as a notational convenience, we refer to the copies of in as and (in arbitrary order). The resulting rule matches most of the graph:
and rewriting (and removing extraneous wire-vertices) results in
4.4.1 Nested and Overlapping !-boxes
The effect of edges between !-vertices on the !-box operations deserves special attention. These edges can be thought of as indicating a parent-child relation, with the edge going from the parent !-box to the child !-box (although this view does not entirely hold up when we consider the self-loop that exists on each !-vertex).
The edges from a !-vertex determine which other vertices should be copied or killed when the !-vertex has COPY or KILL, respectively, applied to it. So an edge from a !-vertex to another !-vertex means that copying will also copy (and its associated !-box).
An example that demonstrates the difference that an edge between !-vertices makes is the following:
This will match any tree of white nodes of depth at most two. However, if we remove the edge between the !-vertices, we get only the balanced trees.
The reason for this is that when we copy the outer !-box in the case where there is an edge, we also copy the inner !-box.
This produces a separate !-box round each second-level node, allowing different numbers of nodes on different branches (e.g. one could copy and kill ). In the case without an edge, however, copying will simply extend , and every copy of will have exactly one second-level node in each first-level branch of the tree.
In this case, copying or killing would have the same effect on every branch of the tree, ensuring that it remains balanced.
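This difference can be made concrete by enumerating instances. In the sketch below, a depth-two tree is represented just by the tuple of its branch sizes (the number of second-level nodes under each first-level node); the function names and bounds are illustrative assumptions:

```python
from itertools import product

def nested_instances(max_branches, max_leaves):
    """With an edge between the !-vertices, each copy of the outer box
    carries its own inner box, so branch sizes vary independently."""
    return {sizes
            for k in range(max_branches + 1)
            for sizes in product(range(max_leaves + 1), repeat=k)}

def unnested_instances(max_branches, max_leaves):
    """Without the edge there is one shared inner box: copying it adds
    a node to every first-level branch, so only balanced trees arise."""
    return {(m,) * k
            for k in range(max_branches + 1)
            for m in range(max_leaves + 1)}

# The unbalanced tree with branch sizes (2, 0) needs the nested boxes:
assert (2, 0) in nested_instances(2, 2)
assert (2, 0) not in unnested_instances(2, 2)
# Every balanced tree is also an instance of the nested pattern:
assert unnested_instances(2, 2) <= nested_instances(2, 2)
```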
Example 4.4.5. Recall the generalised bialgebra graph equation from the start of the chapter: Ignoring, for the moment, the details of the structure used to represent this equation (which we detail in section 5.2), we can (informally) show that the spiderised bialgebra law is an instance of this. First, we apply :
Next we apply . At this point, our shorthand !-box notation causes more confusion than it alleviates, so we use a hybrid notation where we show some of the !-box containment edges explicitly.
Finally, we DROP all the top-level !-vertices, giving us the spiderised bialgebra law.
If the !-boxes had not been nested, we would have ended up with the weaker statement
4.5 Related Work
Boneva et al describe [4] a system of graph shapes to generalise over classes of graphs in the context of model checking. Their approach is similar to the typegraph constructions we introduced for string graphs; a shape is a graph with some multiplicity constraints on edges and vertices, and a graph is a concretisation of a shape if there is a morphism from to satisfying those constraints (for example, a node of could require that exactly 3 nodes of map to it, or an edge could require more than 5 edges to match it). There is a shape-wide degree of approximation in these multiplicities.
The aim of graph shapes is different from that of !-graphs, even though both attempt to capture families of graphs in a single graph, and this leads to some substantial differences between them. For example, graph shapes are mostly local (although there is some weak grouping of nodes); multiplicities, in particular, are specified on a per-edge or per-node basis. !-graphs, by contrast, operate on arbitrarily-large subgraphs.
Taentzer introduced the idea of amalgamated graph transformations in [39]. These provide a framework for applying multiple, potentially overlapping, rewrite rules in parallel by identifying common sub-rules. For example, one could amalgamate two copies of the string graph rewrite rule
using the common sub-rule
to produce the familiar rule
Doing this an arbitrary number of times would allow the construction of any one of the rules we intend
to represent. However, we cannot allow arbitrary amalgamations of this sort. While
are both valid rules,
certainly is not.
Rensink's nested graph transformation rules [34, 35] extend this amalgamation idea using trees of rewrite rules, each rule being a subrule of its children. A predicate language over the left-hand side of these rules allows finer control of the matching process (and, by using a tree of graphs rather than of rewrite rules, the expression of complex graph predicates).
While the predicate language is unnecessary to codify spider-like equations, the idea of rule trees could be used as a way of encoding the !-graphs we described above, even including the nesting of !-boxes. For example, using Rensink's notation, we could describe the spidered version of X copies using the nested graph transformation rule
The fact that an instance of this tree can have any number of copies of any given branch (including none) gives us the family of rewrite rules we desire. However, this technique could not be used to encode the generalised bialgebra law, as the tree structure does not allow for overlapping !-boxes, and hence cannot express arbitrary bipartite graphs. The approach we take (encoding !-boxes in the graph itself) also allows us to use graph rewriting to reason directly about !-graphs, as we will see in later chapters.
Stallmann produced an extension to the UML-based Story Patterns employed by FuJaBa [30] in [38]. These enhanced Story Patterns resemble Rensink's nested graph transformation rules in character, and resemble !-graphs in presentation. They allow parts of a Story Pattern to be marked with modifiers familiar from predicate logic, such as , or . For example, the following enhanced Story Pattern captures the requirement that every instance of class B is contained in an instance of class A:
Chapter 5
!-Graph Rewriting
!-graphs provide the foundation for representing infinite families of string graph equations and rewrite rules. In this chapter, we will demonstrate the construction of equations and rewrite rules composed of !-graphs and show how they can be used to rewrite string graphs, producing new string graph equations, and even !-graphs, producing new !-graph equations. Importantly, the latter is sound with respect to the interpretation of !-graph equations as families of string graph equations.
The first two sections of this chapter are again based on [26] (pp 9-11). The other two sections have not previously been published.
5.1 !-Graph Equations
We can combine !-graphs to form !-graph equations in much the same way we did string graph equations. We will extend the !-box operations (and hence the idea of instantiation) to these equations, and this will determine the string graph equations represented by a given !-graph equation.
We can view a string graph equation as a pair of string graphs with correlated inputs and outputs. In the same way, we want a !-graph equation to encode a pair of !-graphs and a correlation of inputs, outputs and !-vertices. For example, the spider law given in (4.1) could be represented as (5.1)
If we and , we get the following equation.
The rest of this section will be devoted to formalising these constructions and showing that we can apply the !-box operations to them.
We will use a span of monomorphisms , where equates Bound with . Before we get to the technical definition, we need a short lemma.
Lemma 5.1.1. If is a !-graph such that has no isolated wire-vertices, then we have the following pushout:
Proof. If has no isolated wire-vertices, and are disjoint. The result then follows from the definitions and the fact that pushouts are unions in .
The definition of a !-graph equation we give here looks quite different from the definition of a rewrite pattern given in [26] (pp 9). We are using this formulation to better demonstrate the similarities with string graph equations (definition 3.5.1), and because it is easier to use when constructing proofs.
Definition 5.1.2 (!-graph equation). A !-graph equation is a span in where
and contain no isolated wire-vertices;
and ;
; and
the following diagram commutes, where and are the inclusions from lemma 5.1.1 composed with the above isomorphisms: (5.2)
A !-graph rewrite rule is a directed !-graph equation.
It can be seen by comparing the definitions that applying to a !-graph equation will yield a string graph equation, and similarly for rewrite rules.
Note that the isomorphisms in the above definition are forced to agree with and . In particular, lemma 5.1.1 can be used to show that the following diagram commutes (5.3)
and if we expand the definitions of and in (5.2), we get (5.4)
We will now extend the !-box operations to !-graph equations. The intuitive rule is that the same operation must be performed on all three graphs in the span, with the correspondence between !-boxes determined by the morphisms of the span. Of course, to get an equation, we need some way of updating the morphisms of the span in addition to the graphs.
It turns out we can do this with any monomorphism of !-graphs that reflects !-box containment:
Lemma 5.1.3. Let be a !-graph monomorphism that reflects !-box containment, and let be a !-vertex in . Then there are !-graph monomorphisms
that reflect !-box containment and that commute with in the following ways: (5.6) (5.7) where and and are the maps from the pushout defining :
and and are defined similarly.
These maps are all the unique ones satisfying these diagrams, up to unique isomorphism.
Proof. Let and . Note that the image of under is contained in , since is injective (so no vertex of other than maps to ). So there is a unique monomorphism satisfying (5.5). reflects !-box containment, since if , and .
Now we deal with . Let and . We start by showing that the image of under is contained in . These are full subgraphs of and , respectively, so we only need to consider vertices. Let be a vertex in . So is not in . Then, since reflects !-box containment, is not in , and so is in . So we can construct the monomorphism by simply restricting the domain and codomain of . Since the inclusion of into is monic, this morphism must be the unique one satisfying (5.6).
Now we show that reflects !-box containment. Let be a vertex of and , with . Then , and so and the edge from to are both in the image of . Further, cannot be in , since it is in . So the preimage of cannot be in , and hence must be in , along with the joining edge, and so and the edge from to are in the image of .
Finally, we consider . Let and . We demonstrate the existence of by pushout. If we let and be the natural inclusion maps, (5.6) gives us that
and there is then a unique making the following diagram commute: Since all the other arrows in the diagram are monic, must be as well. For a given choice of and (relative to and ), it is the unique morphism satisfying (5.7). If we do swap and , we can note that the pushout is symmetric, and pushouts are unique up to a unique isomorphism, so is unique up to a unique isomorphism.
We must show that reflects !-box containment. Suppose is a vertex in and , where is a !-vertex in , with the edge from to . is in the image of one of the ; we call the preimage of under it , and its source (which maps to ) must also be in the image of , and its preimage must be in the image of . What is more, the preimage of this under must map to by , due to (5.7).
We know that, since reflects !-box containment, must be in the image of . But this means that must be in the image of , and hence also reflects !-box containment. So then we just let
Given a !-graph equation , we know that and are isomorphisms when their codomains are restricted to and , respectively. This means that they must reflect !-box containment, so we can now make the following definitions.
Definition 5.1.4 ([26], pp 10). Let be a !-graph equation defined by the span , and let , with and . Then the !-box operations on !-graphs have the following equivalents on !-graph equations:
is defined to be
is the span
is the span
In the case of !-graph rewrite rules, the direction is preserved in the obvious way.
Note that the equivalent definitions in [26] are the same; the definition here appears simpler because we have made use of lemma 5.1.3.
We must, of course, ensure that these operations produce !-graph equations. We start with some useful results about the operations of lemma 5.1.3.
Lemma 5.1.5. All the operations from lemma 5.1.3 respect composition, at least up to unique isomorphism. For example,
Proof. Consider the case of DROP. Suppose we have !-box-reflecting monomorphisms
and suppose . Let and so on. Then we just consider the diagram
But is the unique map from to that makes the outside edges of that diagram commute, and so must be the same as . The same argument holds for .
A similar argument can be used for COPY, but care must be taken, as the result of applying COPY to a map is only unique up to isomorphism. It is possible, however, to make the relevant choices such that , which is sufficient to get the result we want.
Lemma 5.1.6. All the operations from lemma 5.1.3 preserve isomorphisms.
Proof. This follows from lemma 5.1.5 and the fact that and are the identity, and is an isomorphism of .
Lemma 5.1.7. Let be a !-graph with no isolated wire-vertices. Then and do not have any isolated wire-vertices.
Proof. If were an isolated wire-vertex in , it must be the image of a map from , but does not contain any isolated wire-vertices, and so the map must carry an incident edge with it.
is a full subgraph of and is not adjacent to any wire-vertices, so any wire-vertex in must have the same incident edges as the corresponding wire-vertex in .
is isomorphic to , and so has no isolated wire-vertices.
Theorem 5.1.8. Let be a !-graph equation. Then applying any of the !-box operations of definition 5.1.4 yields another !-graph equation.
Proof. Let be a !-graph equation, and be the result of applying one of the !-box operations to it. We will demonstrate the proof for , but all of the arguments apply to and as well.
It is clear from lemma 5.1.3 that this is a span of !-graphs.
and do not contain any isolated wire-vertices, by lemma 5.1.7.
Since is a !-graph equation, we have isomorphisms
These all preserve !-box containment, and hence we have isomorphisms (by lemma 5.1.6 and theorem 4.3.7)
Finally, we need to show that the following diagram commutes: (5.8)
where is composed with the inclusion of into Bound , and similarly for and .
The proofs for the four triangles on the left and right are all similar, so we will demonstrate the top left triangle. Consider the following diagram:
(5.9)
where is the normal inclusion map. The bottom triangle commutes by lemma 5.1.5, since by our assumption that is a !-graph equation. That the upper right triangle commutes can be seen from the proof of theorem 4.3.7. For the upper left triangle, consider the following:
where we have abused to refer to the inclusion of into . Now we can again use the proof of theorem 4.3.7 to see that the square on the left commutes, the triangle on the right commutes by lemma 5.1.5 and the definition of , and the bottom triangle is simply the definition of . But now is composed with the inclusion on the left of the diagram, by definition, and so we have that the top left triangle of (5.9) commutes, and hence the upper left triangle of (5.8) commutes.
We now look at the upper middle triangle of (5.8); the lower middle triangle follows similarly. Consider the following diagram:
We have already demonstrated that the triangle on the left commutes, and the triangle on the right follows in a similar manner. The top square is just the definition of , and the outside edges of the diagram commute by lemma 5.1.5 and the assumption that is a !-graph equation. It follows that the triangle at the bottom commutes, and this is the top middle triangle of (5.8).
Hence we have shown that is, indeed, a !-graph equation.
Corollary 5.1.9 ([26],pp 11). Let be a !-graph rewrite rule. Then applying any of the !-box operations of definition 5.1.4 yields another !-graph rewrite rule.
The concept of instantiation from definition 4.4.1 then extends to !-graph equations and rewrite rules in the obvious way. We will consider a !-graph equation to hold if and only if all its concrete instantiations hold.
5.2 Rewriting String Graphs with !-Graph Equations
For a !-graph equation and a string graph , we want to use the directed form of the equation,
, to rewrite , and the resulting graph should be a valid rewrite of in the (usually infinite) string graph rewrite system comprised of the concrete instantiations of .
Because applying a !-box operation to a !-graph rewrite rule also applies it to the components of that rule, and the LHS in particular, we can use the notion of matching from section 4.4 to find a concrete instance of the rewrite rule that can be applied to a given string graph .
Suppose we have a !-graph rewrite rule and a string graph . If is a matching of onto under , we can apply to to get a string graph rewrite rule . By considering definition 5.1.4, we can see that is the string graph obtained by instantiating with . So we have a matching from to , and can therefore rewrite with .
Providing finding the instantiation and matching is decidable (which we demonstrate in chapter 7), rewriting with is decidable, and our requirement that this is a valid rewrite under the rewrite system of instantiations of is satisfied.
So we have achieved our aim of expressing certain infinite families of string graph rewrite rules in a finite (and useful) way. In the next section, we will consider how !-graph rewrite rules can actually be used to rewrite !-graphs, and thereby derive new !-graph rewrite rules.
5.3 Rewriting !-Graphs With !-Graph Equations
Let be a !-graph monomorphism. Then we have the following pushout complement
in if and only if satisfies the no-dangling-edges condition. This condition is that for every vertex in , every edge in incident to is in the image of . This is stronger than being a local isomorphism, as it also requires that if a !-vertex and a vertex in are both in the image of , any edge between them must also be in the image of .
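Operationally the condition is a straightforward check. The sketch below assumes the match is presented by its image in the host graph: a set of matched interior (non-boundary) vertices and a set of matched edges. These names are invented for the illustration, and the boundary bookkeeping of the actual definition is elided:

```python
def no_dangling_edges(interior, matched_edges, host_edges):
    """Every host edge touching a matched interior vertex must itself
    be in the image of the match; otherwise removing the matched part
    would leave a dangling edge."""
    return all(e in matched_edges
               for e in host_edges
               if e[0] in interior or e[1] in interior)

# A wire n -> w -> m matched only on the edge (n, w) dangles at (w, m):
host = {('n', 'w'), ('w', 'm')}
assert not no_dangling_edges({'w'}, {('n', 'w')}, host)
# Matching both incident edges satisfies the condition:
assert no_dangling_edges({'w'}, host, host)
```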
Proposition 5.3.1. If the pushout complement (5.10) exists, is a !-graph.
Proof. As all the morphisms in the diagram are monic,we consider and to be subgraphs of , and their common subgraph.
Consider applying to the pushout square. This is a pushout in ,by proposition 3.3.4 But pushout complements are unique up to isomorphism in ,and we know that the pushout complement of
is a string graph (since is a local isomorphism,by the no-dangling-edges condition),so must be a string graph.
We apply to the pushout square,giving us another pushout (again,by proposition 3.3.4). Then,
since ,we must have that ,and hence must be posetal.
Let . Then . We know that is open in , and , so . But this is open in by proposition 4.1.3.
Let , with , and let . Then and , and so . The edge linking to must either be in or . Suppose it is in . Then so must and be, and so and must be in . But then the edge linking them must also be in (by the definition of Bound) and hence in . So .
So is a !-graph.
Note that, while the no-dangling-edges condition will always produce a pushout complement, it will not always lead to a valid rewrite. For example, in the following commutative diagram in Graph, the map on the left satisfies the no-dangling-edges condition, but the resulting (bottom right) graph is not a valid !-graph, as is not an open subgraph. (5.11)
Definition 5.3.2 (!-graph matching). A monomorphism between two !-graphs is a !-graph matching when is a local isomorphism and reflects !-box containment. If , with instantiation , is said to be a !-graph matching of onto under .
Of course, if is a !-graph matching, then and the inclusion of in have a pushout complement. But, unlike in (5.11), a !-graph matching always produces a rewrite.
Theorem 5.3.3. Let be a !-graph rewrite rule and be a !-graph matching for it. Then has a rewrite at .
Proof. Let be the !-graph rewrite rule,and let
be the pushout complement of and . Then it is sufficient (by proposition 4.3.3) to show that the span formed from and is boundary-!-coherent.
Theorem 3.6.7 gives us that and are boundary-coherent,and then lemma 3.6.8 gives us that and must also be boundary-coherent.
reflects !-box containment by the definition of a !-graph rewrite rule. Let be an edge in from a !-vertex to a vertex , where is in the image of . Then must be in the image of , and so must be in the image of , since reflects !-box containment. But then, since this is a pushout square, must be in the image of as required, and so also reflects !-box containment. Thus and are boundary-!-coherent, as required.
Example 5.3.4. Recall the following proof from section 2.2
Consider the left equality. We can perform the equivalent rewrite by using instances of the and spider laws
We can use the first of these to construct the following rewrite (ignoring issues of wire homeomorphism):
is a local isomorphism, and can be seen to reflect !-box containment (for , in this case). As expected, we get a valid !-graph at the end. One more application of the spider law and two of the spider law will produce the chain of rewrites we want.
Suppose we have the following rewrite (where is not necessarily a !-graph matching):
The bottom span will almost certainly not be a !-graph rewrite rule (as will not, in general, have only isolated points), but it is trivial to produce such a rewrite rule by discarding unwanted vertices and edges from .
Theorem 5.3.5. In (5.12),if has no isolated points then there is a subgraph of such that
is a !-graph rewrite rule.
Proof. We start by showing that is in the image of .
and are isomorphisms since and form a !-graph rewrite rule. It then follows from the pushout squares that and are also isomorphisms. So we must have that is in the image of .
If is in the boundary of , it is either not in the image of , in which case it must be in the image of , or its preimage under is in the boundary of (since string graph monomorphisms reflect boundaries), in which case this preimage is in the image of and hence is in the image of . Now if and there is an edge from to in , we also know that if is not in the image of , it must be in the image of . So suppose is in the image of . Then its preimage must be in the image of by the same reasoning as above, and hence is in the image of . So Bound is in the image of .
We can use the same argument to show that is in the image of .
Let be the preimage of under . The inclusion of in is monic, and hence injective, so we have .
We already know that the image of under is ,since and are both isomorphisms. We now show that,for all vertices in is an input (resp. output) if and only if is an input (resp. output). This will then give us that the image of is exactly (since we already know that is in that image).
So let be a wire-vertex in . We know that is either an input or an output. Suppose it is an input (the output case is symmetric). Then must also be an input, and if is in the image of , its preimage must be an input. Let be the preimage of under . Then we know that and hence , and is an input (since is). So the preimages of under both and are inputs, and hence must also be an input. If is not in the image of , then is not in the image of , and hence is not in the image of . Thus cannot introduce an incoming edge for , which must therefore be an input.
Injectivity of gives us the isomorphism . This restricts to and ,as we just proved that is an input if and only if is an input. The diagram from definition 5.1.2 commutes by construction, and hence
is a !-graph rewrite rule.
Example 5.3.6. Continuing example 5.3.4 we get the following commutative diagram:
Thus we have the !-graph rewrite rule
5.4 Soundness of !-Graph Rewriting
We have shown how we can rewrite !-graphs using !-graph rewrite rules, and in doing so produce further !-graph rewrite rules. However, we also want to ensure that the resulting rewrite rule is sound with respect to its interpretation as a family of string graph rewrite rules.
Note that this differs from soundness of string graph equations or rewrite rules, as discussed in chapter 3, where soundness was with respect to a particular valuation. Instead, we are interested in soundness with respect to a particular rewrite system of string graphs.
Specifically, if we have a !-graph rewrite rule which has been used to rewrite a !-graph to another !-graph , and there is an instance of the resulting !-graph rewrite rule that can rewrite the string graph to , we require there to be an instance of that also rewrites to .
Luckily, not only does our definition of !-graph matching guarantee that we get a rewrite, it also ensures that we can preserve the rewrite under !-box operations. We know that the top span ( and ) share !-vertices (in that we get isomorphisms if we apply to the maps), and the same is true of the bottom span . For notational convenience, we will identify the !-vertices across each of these spans (e.g.: if is a !-vertex in , we will consider to be in as well, by which we mean ).
Further,because the maps and of (5.12) are monic,we can identify each !-vertex in the top span with one in the bottom span. So, given that we wish to perform a !-box operation (which we will denote ) on and propagate it to the entire rewrite,we have two possible cases: either the !-vertex being operated on is in the image of ,in which case we consider it as existing across the whole rewrite, or it is not, in which case it is only in the bottom span.
If is in the image of ,lemma 5.1.3 provides us with a way of updating all the maps of the rewrite:
We will, of course, need to check that this is still a rewrite.
If is not in the image of , lemma 5.1.3 is insufficient. However, the following lemma will allow us to extend !-box operations on maps to !-vertices that are not in the image of the map.
Lemma 5.4.1. Let be a !-graph monomorphism that reflects !-box containment, and let be a !-vertex in but not in the image of . Then there are !-graph monomorphisms
that reflect !-box containment and that commute with in the following ways: (5.14)–(5.16)
where and and are the maps from the pushout that defines .
Proof. Since is not in the image of (and so neither are any of its edges), we let be , but with codomain . Simply restricting the codomain cannot break the property of reflecting !-box containment, so this is inherited from .
follows in the same way once we note that, since reflects !-box containment, the image of cannot intersect .
Recall the definition of :
Let be the induced inclusion of into (equal to both composed maps in the diagram). Now we define
and so we have a monomorphism that satisfies (5.16). For the remainder of this proof, we will use to refer to for convenience.
Let be a vertex in and an edge from a !-vertex to in . must be in the image of at least one of the . Then (one of) its preimage(s) is an edge from a !-vertex to in , and so must be in the image of (as reflects !-box containment). So is in the image of , and hence also reflects !-box containment.
Armed with this result,if is not in the image of ,we can do the following:
which we again need to show is a rewrite. The first step (for both (5.13) and (5.17)) is to show that the !-box operations preserve the property of being a !-graph matching.
Lemma 5.4.2. Let be a !-graph matching and let be a !-vertex in . Then , and are all !-graph matchings.
Proof. Lemmas 5.4.1 and 5.1.3 give us that they are monomorphisms that reflect !-box containment.
So we just need to show that they are local isomorphisms.
,which we already know to be a local isomorphism.
If is not in the image of , we note that since reflects !-box containment, the image of cannot intersect . Then, since simply restricting the codomain cannot break the property of being a local isomorphism, must be a local isomorphism, since is.
Otherwise,suppose is in the image of ,and is not a local isomorphism. Let be a node-vertex in the image of with an incident edge not in its image. We know that is in the image of ; let its preimage be . Then the end of that does not map to must be in ; call this vertex . But,since ,we must have that ,and so cannot be in and hence there can be no such . So is a local isomorphism when is.
Now we just have to deal with . For convenience,we will call this for the remainder of the proof.
Suppose is not in the image of ,and is not a local isomorphism. So there is a node-vertex in such that is incident to an edge not in the image of . must be in the image of at least one of the . Call (one of) its preimage(s) . Then must not be in the image of (since otherwise would be in the image of ). But is incident to ,and so is not a local isomorphism. Contrapositively,if is is a local isomorphism,then so is .
Now suppose is in the image of . Let be a vertex in with an edge incident to in is in the image of one of the ; call the preimage of under it must also be in the image of ,and its preimage must be in the image of . What is more,the preimage of this under ,which we will call ,must map to by .
We know that,since is a local isomorphism,all the incident edges of must be in the image of ,including . So is in the image of ,and hence in the image of . So is in the image of ,and hence is a local isomorphism.
We know that the two squares of (5.13) commute (up to an isomorphism) by lemma 5.1.5. For (5.17), we need the following lemma:
Lemma 5.4.3. If and are !-graph monomorphisms that reflect
!-box containment, and is a !-vertex of not in the image of , then
and
and similarly for KILL and COPY.
Proof. We can deduce the first DROP case from the following diagram, where the right and outer triangles commute by definition, and hence the left triangle commutes, since the inclusion map on the right is monic.
The second DROP case follows in the same way from
and the KILL cases use the same argument.
For COPY,we recall that is defined to be ,where is the inclusion map induced by the pushout square that defined . We know that this is then ,which is as required.
From the proof of lemma 5.1.3, we note that
where is defined similarly to . Then we have
From this, we can deduce that the two squares of (5.17) commute. All we need now is that both squares in each diagram are pushouts. For this, we recall that pushouts of graphs are just unions of graphs. So a commuting square of monomorphisms
is a pushout if and only if and cover and the image of is exactly the intersection of the images of and . To get that this is the case for (5.17), we use the following lemma.
Lemma 5.4.4. Let be a !-graph monomorphism that reflects !-box containment, and let . If is the normal inclusion, then for any vertex or edge of , is in the image of if and only if is in the image of , and similarly for , and if is one of the pushout maps from the definition of , for any vertex or edge of , is in the image of if and only if is in the image of .
Proof. If is not in the image of ,the arguments for all the operations are very similar; we will use DROP as the example. We know that the following diagram commutes (from lemma 5.4.1): Let be a vertex or edge of . If is in the image of must be in the image of by commutativity of the diagram. Conversely,if is in the image of ,its preimage must map to under ,since is monic (and hence injective) and the diagram commutes.
So suppose is in the image of ,and let be its preimage. We will start with the DROP case again. We have the following diagram (from lemma 5.1.3):
If is a vertex or edge of that is in the image of ,then is in the image of ,by commutativity of the diagram. For the converse,we will consider vertices and edges separately.
Suppose is a vertex of with in the image of . Since is monic,there is a unique with . Now cannot be ,so cannot be ,and hence is in the image of . But then the preimage of under must map to by ,since all the maps are monic and the diagram commutes,so is in the image of .
Now suppose is an edge of with in the image of . Again,there is a unique preimage under ,and the same must be true of the endpoints of the edge. So,by what we have already proved,the endpoints of must be in the image of ,and hence must be (since is full in ),and we have what we require by commutativity of the diagram.
The KILL case is almost identical,except that in the vertex case,we must note that is not in ,as is not in .
The COPY case is also similar. This time we have the following diagram (actually two diagrams, with .
As in the DROP case,if is a vertex or edge of that is in the image of ,then is in the image of .
Suppose is a vertex of with in the image of ,which we will denote (the case for is similar). We then have a vertex of that maps to by . Now is in the image of at least one of and . If is in the image of ,consider its preimage in ,and hence and are both in the image of the inclusion ,by the pushout that defines ,with a common preimage. But this means they must,in fact,be the same vertex,and hence is in the image of . The only other option is that is in the image of ; we will again call the preimage . But then we have by the above diagram of monomorphisms,and so is in the image of .
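The pushout criterion used throughout these proofs — the two images must cover the target graph and intersect exactly in the image of the common source — can be checked directly for finite graphs. A minimal sketch, identifying each monomorphism with its image set of vertices and edges (a hypothetical encoding, not the thesis's formalism):

```python
def is_pushout_square(img_B, img_C, img_A, D_elements):
    """Check the criterion from the text: a commuting square of graph
    monomorphisms A -> B, A -> C, B -> D, C -> D is a pushout iff the
    images of B and C cover D, and intersect exactly in the image of A.

    Each map is represented by its image in D, as a set of element ids
    standing for both vertices and edges (a sketch)."""
    covers = (img_B | img_C) == set(D_elements)
    meets_in_image_of_A = (img_B & img_C) == img_A
    return covers and meets_in_image_of_A
```

This is exactly the "pushouts of graphs are just unions of graphs" observation made above, phrased as a decidable test.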
Theorem 5.4.5. Consider the following rewrite,where is a !-graph matching,
and let be a !-vertex of (we will also consider it to be a !-vertex of and ). Then for any !-box operation on ,if is not in the image of ,the following is a rewrite
and if is in the image of ,we consider it to be a !-vertex of and ,and the following is a rewrite:
Proof. We have already established that both of these are commuting diagrams of !-graph monomor-phisms,that the top span in each diagram is a !-graph rewrite rule and that is a !-graph matching. All we have left to show is that all the squares are pushouts.
We will first consider the case. Let be the inclusion map,and similarly for the other -graphs. Suppose we have a vertex or edge in . Then must be in the image of at least one of or ,and so must (by lemma 5.4.4) be in the image of at least one of or . So these maps cover . If is in the image of both these maps, must be in the image of both and . Let be the preimage of under . Then is the preimage of under ,which is in the image of ,since this is a pushout of graphs. But then applying lemma 5.4.4 again gives us that must be in the image of . Then commutativity of the diagrams and the fact that all the maps are monic gives us that the preimage of under must be in the image of or (depending on whether is in the image of ). So and intersect exactly on the part of the graph mapped to from (or ),and so the left part of the diagram is a pushout. The same argument applies to the right squares,and so we have the result for . follows in the same way.
The argument for is similar; here we have to note that if we have in ,it must be in the image of at least one of the pushout maps from that define . We can call this map and its preimage . Then,as before,we use lemma 5.4.4 to get that is the in the image of at least one of or ,since is in the image of at least one of or . Similarly,if is in the image of both maps,it must be the image of something in or (depending on whether is in the image of ) since then is in the image of both and and is hence the image of something in .
In the above theorem,let be the induced !-graph rewrite of the span in the rewrite. It should be clear that the rewrite rule induced by the span
is exactly the result of applying to . So we have that if is a !-graph rewrite derived from a rewrite is derived from either or ,and hence have the following corollary:
Corollary 5.4.6. Suppose is a !-graph rewrite rule that rewrites to at a !-graph matching, and is the associated !-graph rewrite rule. Then for any concrete instance of this rule, there is a concrete instance of that rewrites to , and the associated string graph rewrite rule is a concrete instance of .
This is the semantics of rewriting !-graphs using !-graph rewrite rules: that the rewrite holds for all concrete instances of the rewritten graphs. In this way, just as !-graph rewrite rules allowed us to represent infinitely many string graph rewrite rules at once, !-graph rewriting allows us to do infinitely many string graph rewrites at once.
It is important to note that if we can apply an instantiation to a rewrite, we can apply it to a sequence of rewrites. If we consider two consecutive rewrites
then we know that and contain the same !-vertices, and the same goes for and . If we can instantiate to , applying the same instantiation to will produce a rewrite from to some graph , so we will have a rewrite sequence
If we perform and ,we get the following rewrite:
Chapter 6
A Logic of !-Graphs
In this chapter, we will use the results of chapter 5 to build a logic of !-graphs that can be implemented by a computer, using graph rewriting as the core operation. This will include an equational logic much like that in section 3.5, together with some inference rules that build on that foundation.
In the term world, these rules would be based on predicates involving logical operators rather than purely on equations (although an equation is, of course, a type of predicate). For example, a ∧-introduction rule might state that if you can provide a proof of a predicate and a proof of a predicate , you can deduce that there is a proof of the conjunction of the two. This would be written
Another common rule is induction, where is a predicate over the natural numbers (or some other well-founded set):
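The two displayed rules referred to here were lost in extraction. A hedged reconstruction of the standard forms being described (the predicate symbols P, Q and the variable n are placeholders, not necessarily the thesis's notation):

```latex
% Conjunction introduction: from proofs of P and of Q, deduce P and Q.
\[
  \frac{\vdash P \qquad \vdash Q}{\vdash P \wedge Q}
\]
% Induction over the natural numbers: a base case and a step case
% together give the predicate at every natural number.
\[
  \frac{\vdash P(0) \qquad \vdash \forall n.\; P(n) \Rightarrow P(n+1)}
       {\vdash \forall n.\; P(n)}
\]
```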
Our graphical language does not contain logical connectives, nor does it contain the natural numbers used in the above induction rule. However, !-boxes do induce a well-founded structure amenable to a graphical analogue of induction. In this chapter, we will present this and other rules, and prove their soundness.
6.1 !-Graph Equational Logic
Let be a set of !-graph equations. Our equational logic should look very familiar:
where and ,
is a !-graph rewrite (where, in particular, all morphisms reflect !-box containment), and
are wire-homeomorphic.
To these, we add inference rules for instantiations of !-graph equations:
Since our rewriting process implements AXIOM and LEIBNIZ up to wire homeomorphism and up to instantiation of !-graph rewrite rules, the above logic is equivalent to the relation .
For a set of !-graph equations , let be the set of concrete instances of the equations of . Then the semantics of we use is that, for any concrete instance of ,
in the equational logic of string graphs. Soundness of the above logic with respect to these semantics follows from corollary 5.4.6 and the fact that string graph rewriting implements the equational logic for string graphs.
6.2 Regular Forms of Instantiations
A lot of the proofs in this chapter rely on reasoning about instantiations. However, multiple instantiations can produce the same instance of a !-graph, making them harder than necessary to reason about. In particular, copying a !-box and then killing one of the copies results in the original graph (up to isomorphism).
In this section, we will demonstrate that it is possible to, if not completely remove these redundancies from the set of allowed instantiations, at least require a certain structure that is easy to reason about without affecting the set of possible instances of a !-graph.
We will concentrate on instantiations of !-graphs, but the results extend to !-graph equations.
6.2.1 Depth-Ordered Form
Definition 6.2.1 (Depth). Let be a !-vertex of a !-graph,and the set of parent !-vertices of . Then the depth of is defined by
The depth of a !-box operation is the depth of the !-vertex it operates on.
Another way of viewing the depth of a !-vertex is that it is the longest ancestor-path to a top-level !-vertex.
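Under the recursive reading of this definition — a top-level !-vertex has depth 0, and otherwise the depth is one more than the maximum depth of the parents — the depth can be computed directly. A sketch, assuming the nesting relation is acyclic (as posetality of !-graphs guarantees):

```python
def depth(b, parents):
    """Depth of a !-vertex b (a sketch): 0 for a top-level !-vertex,
    otherwise one more than the maximum depth of its parent !-vertices.

    `parents` maps each !-vertex to the set of its parent !-vertices;
    the nesting is assumed acyclic, so the recursion terminates."""
    ps = parents.get(b, set())
    if not ps:
        return 0  # top-level !-vertex
    return 1 + max(depth(p, parents) for p in ps)
```

Taking the maximum over parents matches the "longest ancestor-path to a top-level !-vertex" reading given above.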
We can now define a depth-ordered form for instantiations.
Definition 6.2.2 (Depth-ordered form). An instantiation is depth-ordered if no !-box operation of depth is preceded by an operation of depth greater than .
So a depth-ordered instantiation has some number of !-box operations of depth 0, followed by some number of depth 1, then of depth 2, and so on.
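Equivalently, an instantiation is depth-ordered exactly when the sequence of operation depths is non-decreasing, which is straightforward to check. A sketch over a hypothetical encoding of an instantiation as a list of (operation, depth) pairs:

```python
def is_depth_ordered(ops):
    """Check definition 6.2.2 (a sketch): an instantiation, encoded as a
    list of (operation, depth) pairs, is depth-ordered when no operation
    is preceded by one of strictly greater depth, i.e. the depths form a
    non-decreasing sequence."""
    depths = [d for (_, d) in ops]
    return all(a <= b for a, b in zip(depths, depths[1:]))
```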
We will demonstrate that any instantiation has a depth-ordered form that is equivalent in the sense that the resulting graphs are isomorphic. To do this, we will present an algorithm that converts any instantiation into a depth-ordered one, where each change that the algorithm makes preserves the resulting graph.
We will need some results about what changes we can make, but first we need a way to talk coherently about those changes. For example, most of the changes we make will be commuting two operations in an instantiation.
An instantiation is a sequence of operations that progressively transforms a graph. The operations in that sequence cannot be divorced from the graphs they operate on. Consider an intermediate stage of a hypothetical instantiation; the operations up to this point have produced the graph
Suppose that the next operation in the sequence is . Then the next intermediate graph will be
The depth of in , and hence of , is 1. If the next operation is , it will have depth 0, so the sequence cannot be depth-ordered.
What we would like to do is exchange the order of and ,while still reaching the same graph :
However, is a vertex of , and does not actually exist in . It is derived from a !-vertex of , though; which !-vertex is obvious from the pictures of the graphs.
More generally,we can construct a morphism from to that maps each vertex or edge to the one it was derived from.
Definition 6.2.3 (Origin Map). Let be a !-graph, and . Then ,as defined by the following pushout:(6.1)
is called the origin map for .
The origin maps for and are the natural inclusion morphisms into (which exist because these operations are graph subtractions).
Armed with the origin map ,we can apply to ,obtaining another graph :
Now we need to find the !-vertex of that corresponds to . In this case,since the origin map of a DROP operation is injective and is in its image,it has a unique preimage in
and we can apply to to get a graph
which is clearly isomorphic to , and the origin maps for both sets of sequences are the same. and are an example of commutable operations.
Definition 6.2.4 (Commutable operations). We say a sequence of two operations (with origin maps and ,respectively) is commutable when is the only vertex that maps to under and has a unique preimage under ,the origin map of .
Note that whether two operations are commutable depends on the !-vertices they operate on. Of course, if we say that the operations are commutable, we expect them to have the same result in either order.
Proposition 6.2.5. Let be a !-graph, with origin map and with origin map . If and are commutable with commuted form (with origin maps and respectively) resulting in , then , and this commutes and .
Proof. If both operations are DROP or KILL, we just note that the net effect is to remove the union of the subgraphs,regardless of the order. For example,if is DROP and is KILL,
If is COPY,the conditions on commutable operations mean that . Suppose is DROP. Consider the effect of on :
Since , must be the same as , i.e. , where is the inclusion of into .
A similar argument works when is KILL. We have
and,because every vertex in in must be in in all copies of it is contained in, must again be the same as .
These arguments also work when is COPY and is DROP or KILL. That just leaves the case where both operations are COPY.
In their original order, we have two pushouts:
Note that the requirements for commutability of operations mean that is in the image of both and ,and is likewise in the image of both and . We construct the isomorphism from to by initially constructing a bijective map from the vertices of to the vertices of .
Let be a vertex of and . We then choose to be if is in the image of and if is in the image of . Likewise, if is in the image of and if is in the image of . We then set .
First,we need to check that we have defined a coherent function. If is in the image of both and cannot be in . Then is not in ,since if it were,whichever of and maps to must also map the edge from to to an edge from to . Thus . Likewise,if is in the image of both and cannot be in and . So is a valid function.
Note that the definition of an origin map means that and . Thus the same construction works in reverse,allowing us to construct the inverse map ,and so is a bijection.
Since !-graphs are simple,we just need to show that,for any two vertices and of ,there is an edge from to if and only if there is one from to . We construct a family of vertices in the same way we constructed the family.
If there is an edge from to ,this edge is mapped to an edge from to by . The existence of means that and must both be in the image of or both be in the image of . Thus and must both be in the image of or both be in the image of ,and that morphism must map to an edge from to . Similarly,either or must map to an edge from to .
The converse case is symmetric,and hence extends to an isomorphism .
We have generalised commutable operations beyond just being on entirely separate parts of the graph, like in the example above, but we still have to deal with non-commutable operations. If the operations are already depth-ordered,such as if the next operation were DROP
we can safely ignore them.
Instead, we will look at operations where the preimage of the !-vertex the second operation acts on is a parent of the !-vertex the first acts on. For example, suppose the next operation on were . Then the operations would not be depth-ordered (the new operation would have depth 0, while has depth 1), but they cannot be commuted; once again, is not in the image of the origin map of .
In this case,we can simply discard the operation,since is clearly isomorphic to . The same would be true if the first operation were or .
Lemma 6.2.6. Let be a !-graph, with origin map and with origin map , where . Then is isomorphic to (with origin map ), and this isomorphism commutes and .
Proof. If OP is DROP or KILL,this has the same net effect,since and hence . If it is COPY,it is sufficient to note that
and hence the COPY operation has no effect on the part of the graph preserved by the KILL operation.
If the next operation is , then will have two preimages under the origin map of , which we will call and . Then the replacement sequence will be
Lemma 6.2.7. Let be a !-graph, with origin map and with origin map , where .
Let , with origin map . Then has two preimages under , and . Let , with origin map ; has a unique preimage under , which we call . Let , with origin map . Then is isomorphic to , and this isomorphism commutes and .
Proof. Intuitively,whatever we did to before the COPY operation,we do instead to both copies of after it.
If OP is DROP,we remove both and ,and this is the same as removing just before applying the COPY. If OP is KILL,the preimage of under is exactly the union of and ,and hence the net effect is likewise maintained. We note that ,and then the COPY case follows from a similar proof to that of two commutable COPY operations (in proposition 6.2.5).
We have not dealt with the second operation being ,because this commutes with , and indeed DROP commutes with any preceding operation on a distinct !-vertex, including child !-vertices.
When we are dealing with rewrite sequences longer than two operations, explicitly referring to the origin maps will become extremely cumbersome. Therefore,if we have an origin map and a !-vertex of with a unique preimage under ,we will also refer to this preimage as . If it has two preimages under ,we will refer to one as and the other as .
We can now set out algorithm 1, which takes an arbitrary instantiation of a !-graph and gives a depth-ordered one that produces the same graph.
Algorithm 1 Order !-box operations by depth
while the instantiation has unmarked operations do
op ← the rightmost unmarked operation
b ← the !-vertex that op operates on
look at the operation op′ immediately to the right of op, operating on b′
if there is no such op′ then ▷ case 1
mark op
else if the depth of op′ is at least the depth of op then ▷ case 2
mark op
else if b′ is not a parent of b then ▷ case 3
commute op and op′
else if op′ is KILL then ▷ case 4
discard op
else if op′ is DROP then ▷ case 5
commute op and op′
else if op′ is COPY then ▷ case 6
remove op and add two copies of op to the right of op′, acting on the two copies of b
end if
end while
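The case analysis of algorithm 1 can be sketched as a dispatch function. This is only the classification step — the graph-level commuting, discarding and copying of operations (proposition 6.2.5 and lemmas 6.2.6–6.2.7) is elided — over a hypothetical encoding of operations as (kind, !-vertex, depth) triples:

```python
def dispatch(op, nxt, is_parent):
    """Return which case of algorithm 1 applies to the rightmost unmarked
    operation `op` and the operation `nxt` immediately to its right
    (None if op is rightmost). `is_parent(b, b2)` is an assumed helper
    returning True when b2 is a parent !-vertex of b (a sketch)."""
    if nxt is None:
        return 1                      # nothing to the right: mark op
    _, b, d = op
    kind2, b2, d2 = nxt
    if d2 >= d:
        return 2                      # already in depth order: mark op
    if not is_parent(b, b2):
        return 3                      # independent: commute op and nxt
    if kind2 == "KILL":
        return 4                      # killing the parent subsumes op: discard
    if kind2 == "DROP":
        return 5                      # DROP commutes with preceding operations
    return 6                          # nxt is COPY: replace op by two copies
```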
The preceding discussion gives rise to the following proposition, stating that the algorithm does not alter the effect of the instantiation:
Proposition 6.2.8. Applying algorithm 1 to an instantiation that results in a graph yields (if it terminates) an instantiation that results in a graph isomorphic to .
The main invariants of the algorithm are that the marked operations are depth-ordered relative to each other,and that no unmarked operation of depth is ever to the right of a marked operation of depth . This can be seen by noting that op is always unmarked and everything to the right of it,including ,is marked. The first part of the invariant is preserved because the algorithm never alters the order of the marked operations, and an operation is only marked when the marked operations to its right have the same or greater depth (and the second part of the invariant requires that the marked operations to the left have the same or lesser depth). The second part of the invariant is maintained because we only ever move op to the right of when and while this can reduce the depth of ,it can only reduce it to .
Proof. We will place a (tight) upper bound on the number of iterations, as well as on the length of the final instantiation. Specifically, given the recurrence relations
we will show that if the input instantiation is of length ,the output instantiation is no longer than operations and the algorithm takes no more than iterations. We will do this by induction on . The case is trivial.
The main thing to note for the step case is that if we have the instantiation ,because the algorithm always deals with the rightmost unmarked operation (and the operation to its right), it will not consider until it is the last remaining unmarked operation. This means that the first part of a run of the algorithm on is the same as a run of the algorithm on .
So suppose the length of is . By the inductive hypothesis,after at most iterations, the instantiation will look like ,where consists of at most marked (and no unmarked) operations,and remains unmarked.
Let be the actual length of . We will show that after at most iterations that hit cases 3-6, the instantiation will be depth-ordered. Since only case 6 iterations can introduce new operations,and only one at that,there can be at most case 1 or 2 iterations before the entire sequence is marked. So there are at most operations in the final sequence,and at most
iterations in total.
We need to note here that any operations in the sequence, marked or unmarked, at any subsequent point that are not part of must be or a copy of . If two copies of are adjacent, they must have the same depth. This is because the depth of a copy of can only be changed by being commuted with a . But any other copy of adjacent to it must have been commuted with the same , and its depth will have been changed in the same way. It follows that cases 3-6 can only be triggered if is a member of .
Suppose we are at some iteration of the algorithm (at or after the point where we have ). Consider , the rightmost unmarked operation in the sequence. Call the subsequence to the left of op , and the subsequence to its right , so the whole sequence is . Let be the number of operations of that are in .
We will show by induction on that, in no more than case 3-6 iterations (and some number of case 1 or 2 iterations), the instantiation will be followed by marked operations.
For , either is empty, in which case we trigger case 1 and are done, or consists only of operations derived from , which must therefore have the same depth as op. This means we trigger case 2, and again are done.
Suppose . If cases 1, 2 or 4 are triggered, we are done. Cases 3 and 5 require the inductive hypothesis, but are straightforward. So suppose we hit case 6. We now get two copies of , each of which has operations from to the right of it. We apply the inductive hypothesis to the right hand one to get that in at most case 3-6 iterations, the left hand one will be the rightmost unmarked operation. As already discussed, this will still have operations from to the right of it, so after no more than further case 3-6 iterations, we will have the sequence we want ( followed by only marked operations). Then we see that
which is what we wanted.
Corollary 6.2.10. Any instance of a !-graph has a depth-ordered instantiation.
The upper bound for this algorithm may appear worryingly high, but it is not an algorithm that would ever be implemented. The fact that such an algorithm exists, however, allows us to assume that any instantiation is depth-ordered; if it is not, we can produce an equivalent one that is. In fact, in chapter 7 we will demonstrate matching techniques that directly produce depth-ordered instantiations.
6.2.2 Expansion-Normal Form
A depth-ordered concrete instantiation has only operations of depth 0. To see this, note that any operation of depth greater than 0 must be on a !-box that has one or more parents, and at least one of those parents must be of depth 0 (by posetality of the subgraph of !-vertices). However, since the instantiation is depth-ordered, there can be no further operations on that !-box, and so it must exist in the final graph, which cannot then be a concrete graph. So the instantiation cannot be concrete.
We will introduce as a shorthand for . As a notational convenience, we will consider to be the same as , since it is the unique preimage of under the composition of the origin maps.
Definition 6.2.11 (Expansion-normal form). A concrete instantiation is in expansion-normal form when it is composed entirely of EXP and KILL operations, and the depth of every operation is 0.
Example 6.2.12. A sequence in expansion-normal form:
Every operation is on a top-level !-box (depth 0), and the only operations are EXP and KILL.
We generally identify the "constant" parts of the graph for each operation (in a manner consistent with the origin maps), and view EXP operations as adding to the graph and KILL operations as removing part of the graph.
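As an illustration of this reading of the operations, here is a small toy model of our own (not the thesis's formal definition): a !-graph is flattened to a set of vertices plus a map from !-box names to their contents, ignoring edges, nesting and wire-vertices. EXP is implemented as COPY followed by DROP on the fresh copy, matching the shorthand introduced above.

```python
import itertools

class ToyBangGraph:
    """Toy model of !-box operations: a vertex set plus named !-boxes.
    Simplifications (ours): no edges, no nested !-boxes, no wire-vertices."""

    def __init__(self, vertices, boxes):
        self.vertices = set(vertices)                         # all vertices
        self.boxes = {b: set(vs) for b, vs in boxes.items()}  # !-box -> contents
        self._fresh = itertools.count()

    def drop(self, b):
        """DROP: delete the !-box, leaving its contents in the graph."""
        self.boxes.pop(b)

    def kill(self, b):
        """KILL: delete the !-box together with its contents."""
        self.vertices -= self.boxes.pop(b)

    def copy(self, b):
        """COPY: duplicate the !-box and its contents under fresh names."""
        fresh = "%s_copy%d" % (b, next(self._fresh))
        copies = {"%s@%s" % (v, fresh) for v in self.boxes[b]}
        self.vertices |= copies
        self.boxes[fresh] = copies
        return fresh

    def exp(self, b):
        """EXP: COPY then DROP the copy -- adds one concrete copy of the
        contents while keeping the original !-box in place."""
        self.drop(self.copy(b))

# An expansion-normal-form instantiation EXP, EXP, KILL on !-box B:
g = ToyBangGraph({"x"}, {"B": {"x"}})
g.exp("B"); g.exp("B"); g.kill("B")
```

Running the sequence leaves `g.boxes` empty (the result is concrete) and `g.vertices` holding two fresh copies of `x`: EXP operations add to the graph and the final KILL removes the !-box, just as described above.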
Theorem 6.2.13. Any concrete instance of a !-graph has an instantiation in expansion-normal form.
Proof. We know from corollary 6.2.10 that there must be a depth-ordered instantiation of from ; we will transform this into an equivalent instantiation (ie: resulting in the same graph, up to isomorphism) that is in expansion-normal form.
In order to do this, we will need to be able to, for a given operation , move any operation on or to immediately after . We will demonstrate this for arbitrary depth-ordered instantiations, not just concrete ones.
Consider an operation (recalling that and are arbitrary labels, and hence interchangeable). The depth of must be the same as the depth of . The only way the depth of could deviate from that of is if there were a DROP operation between them on a parent of . But the depth of this DROP operation would have to be strictly less than the depth of , which contradicts the depth-ordering. Indeed, there can be no operations on any parent of after .
Similarly, there can be no operations on any child of between and , since that operation would have a depth strictly greater than that of , and hence of , which violates depth-ordering. Therefore, proposition allows to be freely exchanged with the operation preceding it until it reaches , and this will maintain the depth-ordering of the sequence.
Note that this also allows us to move back, since we can move back to , and then move back to . Likewise, given , we can move any subsequent back to join it, so we can move all occurrences of and to immediately follow .
Starting with a depth-ordered concrete instantiation, we can eliminate COPY (in favour of EXP) with the following procedure (note that we treat EXP as an opaque operation in its own right, not a composite of COPY and DROP), and so obtain a concrete instantiation in expansion-normal form.
We eliminate COPY operations starting from the right. At each stage, we consider the rightmost COPY operation, . If the next operations on and are both EXP, we first move all the and operations to immediately after . We then replace all the operations by , which is an equivalent operation in this context.
This done, we know that, after , the first operation on must be either or . In both cases, we move it back so that it immediately follows . In the case of , we can just replace both operations with (and replace all later references to with ). Since was depth 0, must also be depth 0. In the case of , we can eliminate both operations (and, again, replace all later references to with ). Then we will have eliminated this occurrence of COPY, and we can proceed onto the next rightmost occurrence, until none remain.
Once we have eliminated all occurrences of COPY, we can eliminate DROP operations by replacing each with , which both have depth 0 since does. is , and this is equivalent to , which is equivalent to .
So we now have a concrete instantiation containing only EXP and KILL operations, and all the operations are of depth 0, so it is in expansion-normal form.
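The DROP-elimination step relies on DROP being interchangeable with EXP followed by KILL. On a toy model of our own (a !-box is just a set of vertices inside a larger vertex set, ignoring edges and nesting), the two give the same graph up to renaming of the expanded copy:

```python
def drop(vertices, box):
    # DROP: the !-box dissolves; its contents stay in the graph
    return set(vertices)

def exp_then_kill(vertices, box):
    # EXP: add one concrete (primed) copy of the contents;
    # KILL: then delete the !-box together with its original contents
    copies = {v + "'" for v in box}
    return (set(vertices) | copies) - set(box)

g = {"a", "x", "y"}         # 'a' outside the !-box; 'x', 'y' inside
box = {"x", "y"}
lhs = drop(g, box)          # contents kept under their original names
rhs = exp_then_kill(g, box) # contents kept under primed names
```

The two results are the same graph up to the renaming that primes the expanded copy, which is exactly the "equivalent up to isomorphism" sense used in the proof.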
Once we have an instantiation in expansion-normal form, if is a top-level !-vertex in the starting graph , we can pull all the operations on to the start of the instantiation. So the instantiation will start with some number of operations, followed by (the argument for this is identical to the one that allowed us to move operations on back to ).
6.3 !-box Introduction
We will demonstrate a simple inference rule that allows us to wrap an entire !-graph equation in a !-box.
Definition 6.3.1 (BOX). Suppose is a !-graph. Then is the -typed graph consisting of together with a fresh !-vertex and an edge from to every vertex in the graph (including itself).
Example 6.3.2. If we consider the graph : then is
We will sometimes write to specify a name for the fresh !-vertex; in contrast to the other !-box operations, is not in but is in .
is easily seen to be a !-graph: and hence is a string graph; extending a poset with a fresh element that is every element of the poset and to itself yields another poset; for every and , so these are all trivially open subgraphs of ; and follows from the fact this is true for and .
As with the other operations, we can apply BOX to monomorphisms that reflect !-box containment:
Proposition 6.3.3. Let be a monomorphism in that reflects !-box containment. Then can be uniquely extended to a !-box-reflecting monomorphism that commutes with the inclusions of into and into .
Further, if is another such morphism, .
Proof. If we have and , the only possible extension of is the one that maps to , and maps each edge between and some vertex in to the edge from to in . This then trivially reflects !-box containment and is monic, since is. It also trivially respects composition.
Applying BOX to every morphism in definition 5.1.2 gives us the following:
Corollary 6.3.4. If is a !-graph equation,then so is
We can now state the inference rule
Theorem 6.3.5. BOX is sound. In particular, if for all concrete instances of ,
then for any concrete instance of ,
Proof. By theorem 6.2.13, there is an instantiation of from in expansion-normal form, and we can further require that decomposes into where is a sequence of operations, followed by .
Applying to produces the disjoint union of and . The next adds another copy of , and so on. Finally, removes the copy of , leaving zero or more disjoint copies of . So applying to will produce , where is disjoint copies of and is disjoint copies of .
Now if we apply to , we know that each operation can only affect a single copy of in and the corresponding copy of in . There are thus instantiations of such that is and similarly for (where we are using the notation from remark 4.4.2).
We can then use the assumption to construct proofs of for each , and use LEIBNIZ to apply these to the decomposed parts of .
This is mostly useful in the construction of larger proofs. We will demonstrate its use in section
6.4 !-box Induction
In this section, we introduce an analogue of induction for !-graphs. This will follow the usual induction scheme of having a base case and a step case that must be proved for a particular !-vertex . However, in the absence of the named variables of terms, we need some way to link a !-vertex in the inductive hypothesis to one in the equation we need to prove for the step case.
We will introduce the possibility of fixing a !-vertex, which prevents any !-box operations being performed on it during the matching process. The idea is that if we take the following graph
and fix ,it should match itself and
but not
or
as it normally would. The result of this is that, given a rule like where is fixed,the only possible rewrites are ones like
where the !-box operations that can subsequently be performed on and (and hence the possible concrete instantiations) are linked.
In order to deal with multiple applications of the induction rule, we will tag these fixed !-vertices with arbitrary symbols drawn from a (countable) set . So we could tag above with ; in this case, we say that is -fixed, and we only allow to match in
if is also -fixed.
To store this information, we augment a !-graph with a partial map
that maps each fixed !-vertex to its tag. We require !-graph morphisms to preserve these tags, and the morphisms of a !-graph equation to also reflect them. In particular, if we have the !-graph equation , the fact we can consider the !-vertices of and to be the same means we can view as being the same as , and call these .
This will have an impact on the !-graph logic, and the notion of soundness we are considering. Specifically, we will consider what we mean for to hold. Let . For each , each non-negative integer , and each !-graph equation , we define to be with operations and one operation applied to it for each -fixed !-vertex in the equation. Then for each function from to the non-negative integers, we define to be with applied to it for each (the order is irrelevant, as these are all top-level !-vertices). Then the semantics of is that for each and each instance of , if we let
then
in the equational logic of string graphs.
We need to adjust the !-graph logic and rewriting to keep it sound under these modified semantics. We place a side constraint on COPY, DROP and KILL that the !-vertex they operate on cannot be fixed (hence the term "fixed"), and on BOX that the equation contains no fixed !-vertices. Likewise, when rewriting, we do not allow any !-box operations to be applied to fixed !-vertices.
Our previous requirement that morphisms preserve fixing tags means that the matching morphisms of a !-graph rewrite can only map -fixed !-vertices to other -fixed !-vertices (although an unfixed !-vertex may match an -fixed one). This also constrains the LEIBNIZ rule.
We also introduce a !-box operation that produces a !-graph identical to but where the !-vertex is -fixed (and similarly for !-graph equations), and a rule
with the condition that is a top-level !-vertex of that is not fixed.
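The tag map and the side conditions can be modelled directly. The following is a hypothetical sketch of ours (the names `FixedTags`, `fix` and `check_operable` are not from the thesis), recording only !-vertex names and their tags:

```python
class FixedTags:
    """Sketch of the partial map from fixed !-vertices to tags, with the
    side conditions from the text: COPY, DROP and KILL may not touch a
    fixed !-vertex, and FIX applies only to an unfixed top-level !-vertex."""

    def __init__(self, top_level):
        self.top_level = set(top_level)  # top-level !-vertex names
        self.tags = {}                   # partial map: !-vertex -> tag

    def fix(self, b, tag):
        # FIX: only a top-level, unfixed !-vertex may be fixed
        if b not in self.top_level or b in self.tags:
            raise ValueError("FIX needs an unfixed top-level !-vertex")
        self.tags[b] = tag

    def check_operable(self, b):
        # Side condition on COPY, DROP and KILL
        if b in self.tags:
            raise ValueError("!-vertex %s is %s-fixed" % (b, self.tags[b]))

t = FixedTags({"b1", "b2"})
t.fix("b1", "s")        # b1 is now s-fixed
t.check_operable("b2")  # fine: b2 is unfixed
```

A subsequent `t.check_operable("b1")` raises, modelling the refusal to apply a !-box operation to a fixed !-vertex during matching and rewriting.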
In the pictorial representations of !-graphs, we will mark -fixed !-vertices with the label .
The following is the inference rule for !-box induction, based on one originally suggested as a possibility by Kissinger in his thesis ([23], pp 179-181). It holds providing is a top-level, unfixed !-vertex in and is a fresh tag (that does not already occur in or any of the equations in ).
Theorem 6.4.1. INDUCT is sound.
Proof. Let . We need to show that for each and each concrete instance of
in the equational theory of string graphs.
We can choose an instantiation of in expansion-normal form with all the operations on at the start (since is top-level). Denote this , where contains all the operations on , and all the other operations. consists of expansions, for some , followed by .
We show by induction on that, for all concrete instances of with an instantiation of the form ,
is just . It follows from the definition of that there is an instantiation of from such that only operates on top-level !-vertices of . Then is equivalent to , and so is an instance of . The assumption then gives us that
: let be with the first expansion removed (so ). We know that is equivalent to , and so is an instantiation of from . But
since all the operations are on top-level !-vertices of . Further, we can define
and then
So now we have that is an instantiation of from , and then the step case assumption means we can construct a proof of
Now the way was defined means that
and so the proof may contain invocations of AXIOM with a concrete instance of . Consider one such invocation, where the axiom is a (concrete) instance of , which is the same as . Then is an instance of and there is an instantiation of the form witnessing this. The inductive hypothesis then allows us to construct a proof of
We can thus replace all such AXIOM invocations with proof trees using only the axioms of , and so discard from , leaving us with a proof of
as required, and the induction is complete.
Now we just let , and we have
as required.
This can be viewed as a sort of !-box introduction rule, more powerful than BOX (although relying on correspondingly stronger assumptions). It can be used to produce !-versions of concrete equations, such as those generated by an automated tool like QuantoCosy [23].
Example 6.4.2. Suppose we have the following three graph equations, drawn from the calculus presented in section 2.2: (6.2), (6.3), (6.4)
The first two are easily found by QuantoCosy, and the latter is the spider theorem arising from commutative Frobenius algebras. We wish to use !-box induction to show (6.5)
The base case is just (6.2), and so is trivially satisfied. Now we need to show that adding
to the other equations allows us to derive
This follows using !-graph rewriting:
6.4.1 The Generalised Bialgebra Law
We now demonstrate a more ambitious application of !-box induction, where we derive the generalised bialgebra law for the calculus.
Recall the axioms of the calculus:
We have omitted the commutativity laws, as these are inherent in the fact that all the nodes are variable-arity, and hence the inputs (resp. outputs) of a or node are indistinguishable from each other. In particular,
is actually the same string graph equation as DUAL.
We will start by proving some symmetries of these equations. Note that, for brevity, we will tend to condense multiple applications of the same rewrite rule (or of rules that differ only by colour).
Lemma 6.4.3.
Proof.
Lemma 6.4.4. (6.7)
We have already derived (6.8)
and (6.9)
follows from (6.7) by essentially the same proof.
Lemma 6.4.5.
Proof. By INDUCT on . The base case is (6.11) and the inductive step is (6.12); we then use BOX on XSP to get (6.13)
Lemma 6.4.6.
Proof. By INDUCT on . The base case is
Theorem 6.4.7.
Proof. First we use BOX on ZSP to get
We will need to use INDUCT twice. First, we use it on . The base case is . The inductive hypothesis is (6.14)
and we need to show
(6.15)
We apply INDUCT again, this time on . The base case is
The new inductive hypothesis is
(6.16) and we need to show
Thus we have demonstrated that (6.17) holds, so INDUCT gives us that (6.15) holds, and INDUCT again gives us that the theorem holds.
This example, while a demonstration of the power of INDUCT, still requires the rewrite system to initially contain the spider laws. Appendix B demonstrates a possible way of using !-box induction to introduce !-boxes into rewrite systems that previously contained only concrete rewrite rules.
6.5 Merging !-boxes
The MERGE operation presented in this section was included as one of the core !-box operations in [26] (pp 6-7, 10-11). Theorem 6.5.5 demonstrates that this is unnecessary.
Suppose we have the equation
and the graph
It is easy to see that for any concrete instance of the graph, there will be a concrete instance of the equation whose LHS matches the graph: we just need to do the same things to both and that we do to . However,the LHS of the equation as it stands will not match the graph.
In order to allow this, we will introduce a new !-box operation that will merge two disjoint !-boxes in a !-graph. The two !-boxes will need to have the same parents in order to ensure that combining them will not affect the contents of any other !-box in the graph.
Definition 6.5.1 (MERGE; [26], p. 7). Suppose is a !-graph and , with and , then is a quotient of where and are identified. More explicitly, this is the coequaliser
in , where is the normal inclusion map and is the inclusion of into composed with the obvious isomorphism from to .
This construction identifies and , and leaves all the other vertices of untouched. The preconditions ensure that every incoming edge of corresponds exactly to an incoming edge of (and vice versa); each such edge to is identified with its corresponding edge to . If a pair of !-vertices satisfies the preconditions of MERGE, we say they are mergable.
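The mergability precondition can be sketched as a simple check. This is a simplification in our own notation (each !-box is recorded with its contents and its set of parent !-boxes; the names are ours):

```python
def mergable(contents, parents, b1, b2):
    """b1 and b2 are mergable when they are distinct and disjoint (neither
    contains the other or shares contents) and have the same parents, so
    identifying them cannot affect the contents of any other !-box."""
    return (b1 != b2
            and b1 not in contents[b2] and b2 not in contents[b1]
            and contents[b1].isdisjoint(contents[b2])
            and parents[b1] == parents[b2])

# A small example: b1 and b2 are disjoint top-level !-boxes; b3 overlaps
# b1's contents and sits inside b1, so it is mergable with neither.
contents = {"b1": {"x"}, "b2": {"y"}, "b3": {"x", "z"}}
parents = {"b1": set(), "b2": set(), "b3": {"b1"}}
```

Here `mergable(contents, parents, "b1", "b2")` holds, while the pairs involving `b3` fail on disjointness or on the equal-parents requirement.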
Proposition 6.5.2. Let be a !-graph with mergable !-vertices and . Then is a !-graph and .
Proof. Let , and let be the coequaliser map.
exists in , as has coequalisers of monomorphisms. Note that is surjective by lemma 4.2.7, since it is a regular, and hence strong, epimorphism of .
We can construct a morphism that is the identity on (and collapses all the other !-boxes), and this trivially coequalises and . So there is a unique making
commute. Since is an isomorphism, is a (split) monomorphism as well as being a strong epimorphism, and hence is invertible. So , and hence , must be a string graph, since is.
Let be vertices in such that . We can show that either or . Suppose , let be together with the graph of !-vertices
and edges from and to each vertex of . Then let be the identity on and take every vertex in to except , which it takes to . This also fixes the edge maps. Then and will coequalise and , and so there must be a unique map such that . But then even though , which is impossible. So we must have or . Repeating with the roles of and switched, we get that or . Thus if , then .
is reflexive, as it is the image of , the domain of which is reflexive.
Let be vertices in with edges between them such that and
is anti-symmetric, and so the preimages of and in under cannot be in the same loop construction. Thus either or must be mapped to by more than one vertex of , and hence must have the preimage . This cannot be true of both, since . WLOG, suppose it is true of . Then , the preimage of , must be in and (or vice versa). But and (since ) must be in , which can only be true if , which we assumed was not the case. So there can be no such and .
Simplicity of follows a similar argument: if and are edges from to where , then if the preimage of is , must be in the image of and , and so the preimages of and must be coequalised by , and hence ; and if the preimage of is , then the fact that means the preimages of and must be the same, and hence .
For transitivity, let be vertices in with an edge from to and from to . Let be arbitrary preimages of the respective edges under , as before. Then either , in which case the existence of an edge from to is immediate from transitivity of , or . WLOG, assume and . Then gives us that must be a predecessor of , and so must be a predecessor of , as required.
Let . Then, because is surjective and is bijective, is the union of for each preimage of under . But is an isomorphism, so is the union of open subgraphs of , and hence is open in .
For the final requirement of a !-graph, suppose with . We need to show, for any vertex of with , that . If is a !-vertex, this follows from transitivity of . Otherwise, the edge from to has a preimage under , as does the edge from to . If , there must be an edge from to in , which will map to an edge from to in . Otherwise, we must have and (or vice versa, which is equivalent). But then the preconditions for MERGE require an edge from to as well, which also induces an edge from to , and hence from to .
Proposition 6.5.3. Let be a morphism of that reflects !-box containment, and let and be mergable !-vertices of such that and are mergable in . Then there is a unique morphism
making the following diagram commute
where the down arrows are the coequaliser maps from the definition of MERGE, and this morphism reflects !-box containment. Further, this construction preserves monomorphisms and strong epimorphisms and respects composition.
Proof. Let and . We will construct the following commuting diagram
where will be uniquely determined by universality of coequalisers. This will then be .
The top square is two separately-commuting squares: and . If is a vertex in , then the edge witnessing this is mapped by to one witnessing . So if we let be , the restriction of to , reinterpreted with codomain , then we have the first commuting square. The second follows from the fact that , and similarly for and .
Now coequalises and , and hence coequalises and . So coequalises and . Then universality of means that there is a unique making the bottom square of the diagram commute. This universality also means that MERGE respects composition.
The fact that reflects !-box containment means that is surjective, and hence epic. Then if we have morphisms and such that , we know
Then, since equalises and , there is a unique arrow such that . Since is epic, we also have , and hence the bottom square of the diagram is a pushout. preserves monomorphisms under pushouts, and hence is monic whenever is. Also, if is strongly epic, and hence surjective, the same must be true of , and hence of . So is surjective, and hence strongly epic. Since it is surjective, it trivially reflects !-box containment.
Since, for !-graphs, is the same as (up to isomorphism), we can simply apply MERGE to the morphisms of definition 5.1.2 to get the following:
Corollary 6.5.4 ([26], pp 10-11). Let be a !-graph equation, and let be mergable !-vertices of . Then
is also a !-graph equation.
In the case that and are mergable !-vertices of , we posit the following rule:
Theorem 6.5.5. MERGE is sound.
Proof. We will show that every concrete instance of is also a concrete instance of . Suppose is a concrete instantiation of from in expansion-normal form.
While instantiations may only contain COPY, DROP or KILL operations (EXP being a composite of these), for the purposes of this argument we will relax this to allow MERGE operations as well. This is purely a convenience to allow us to call our intermediate steps "instantiations" before we finally arrive at a genuine instantiation containing only those three operations. So we start with the "instantiation" of from , and transform it into a genuine instantiation with no MERGE operations.
The intermediate instantiations may have multiple MERGE operations, but we will always keep these together, and they will not interfere with each other: if and are two adjacent operations in the instantiation, where are the coequaliser maps, we will have that and , and similarly will not interfere with the !-vertices acted on by any MERGE operation on , and so on. This allows us to freely reorder the MERGE operations.
We will use this to collate them into a "composite" MERGE that simultaneously operates on multiple pairs of mergable !-vertices, where each pair has distinct parents to any other pair. We will only ever have one such operation in the sequence; initially, it will be at the start and will have a single pair, . When it has no pairs, there are no MERGE operations left in the sequence and we are done.
We will move the MERGE right one operation at a time until the set of pairs it operates on is empty. So consider the operation to its right. This is either or for some !-vertex . cannot be a child !-vertex of any of the merged vertices, since must be a top-level !-vertex. If none of the merged vertices are in , the operations are independent, and we can move the MERGE one place to the right without changing the resulting graph.
Suppose the operation is . If is a merged vertex, call its preimages under the coequaliser and . If we remove the , remove the pair from the MERGE and put before the MERGE, we will get the same graph. Otherwise, suppose is not a merged vertex, but contains a merged vertex. Then we can swap with the MERGE and remove any pairs in from the MERGE.
Suppose instead the operation is . If is a merged vertex, we can remove and put before the MERGE, leaving the set of merged vertices untouched. Otherwise, suppose is not a merged vertex but contains one, and say it results from the pair . Then we can swap and the MERGE, providing we add to the set of merged vertices, where is the copy of under the expansion and is the copy of . and have no edges to or from and , or the !-vertices of any other merged pair. If there are any other such pairs, do the same to them as well.
Since the number of operations to the right of the MERGE always decreases, this will eventually terminate, and we will have a concrete instantiation of from .
6.6 A More Traditional Logic
Those familiar with formal logics will have noticed something strange about the one presented at the start of this chapter, as well as the one in section 3.5: the equality predicate is not really a predicate at all. What we have presented is a logic of "string graph equations" and "!-graph equations", where those "equations" are actually spans in a category.
In this section, we will sketch out an equational logic of !-graphs, where the equality predicate is just a formal symbol indicating that the objects on either side of it should be considered equal. As we noted in section 3.4.1, we need to somehow encode the correlation between boundaries of string graphs; we also need to encode the correlation between !-vertices in both graphs. !-graph equations are one way of doing this, but to get a more traditional equality predicate, we will extend the idea of a framed cospan (definition 3.4.1) to !-vertices.
Definition 6.6.1 (!-Framed Cospan). A !-graph frame is a triple where is a !-graph consisting only of isolated wire-vertices, !-vertices and edges whose sources are !-vertices; is a total order on , the vertices of ; and is the signing map.
A cospan of !-graph monomorphisms is called a !-framed cospan if
and are !-graph frames
contains no isolated wire-vertices
the following is a pushout square:
for every
for every
As with the framed cospans of string graphs, we can frame a !-graph equation:
Definition 6.6.2. A framing of a !-graph equation is a pair of !-framed cospans and such that there are morphisms and forming the coproduct (ie: disjoint union) and making the following diagram commute:
Just as with framings of string graph equations, such a framing is equivalent to a !-graph equation, in the sense that any !-graph equation can be framed, and the !-graph equation can be recovered from the two !-framed cospans. If two !-framed cospans have isomorphic left and right (or domain and codomain) !-graph frames, they induce a !-graph equation, and we call them compatible.
With this is mind, we can construct a logic of !-framed cospans that is equivalent to the logic presented in this chapter. We have a choice in how we represent the set of axioms: we can use a set of !-graph equations and the rule
where is a framing of , or we can use a set of pairs of compatible !-framed cospans and have the rule
The equivalence relation rules are simple:
(REFL) (SYM) (TRANS)
For the others, we will need to introduce some new notation, or at least extend existing notation. For example, if is a wire homeomorphism of !-graphs, and is a framed cospan, then there is a framed cospan where the inclusions of the framed cospans commute with and . We can then view as a wire homeomorphism from to .
HOMEO can then be expressed as
where is a wire homeomorphism (note that the presence of SYM means that this is equivalent to the version of HOMEO on !-graph equations).
LEIBNIZ requires a bit more work. We will borrow the notion of contexts from term-rewriting, where they are terms with "holes" in them: placeholders that can be substituted with other terms to produce a valid term overall.
The idea with !-graphs will be to provide both an "external" interface to the graph, as a !-framed cospan does, and an "internal" interface. A simple example (without any !-boxes) would be
We could then take a !-framed cospan whose frames match the internal interface of the context, such as
and embed it in the context by merging the images of the frames, producing the !-framed cospan
When there are !-vertices to account for, the !-graph frames of the external interface must contain all the !-vertices (and edges between them) in order for the final construction to be a !-framed cospan. The internal interface need not contain all the !-vertices of the graph, though.
Definition 6.6.3. A !-graph context is a pair of !-graph cospans
where
contains no isolated wire-vertices
and are !-graph frames
the morphisms are all monic and reflect !-box containment
Bound is the disjoint union of the images of and
for every , and similarly for
for every , and similarly for
Given such a !-graph context and another !-framed cospan
we can merge the context with (we say that accepts ). We first merge and , using as the overlap, in a manner similar to the composition of framed cospans from section 3.4.1.
is a !-graph by a similar argument to the one that shows composition of framed cospans to be well-defined (note the reversal of the frames for means that the interpretation of the signing map on the frames is reversed, so inputs of will be merged with outputs of and vice versa).
Next,we merge the two images of using the following coequaliser:
Then we define to be the framed cospan
Now we can write LEIBNIZ as
where is a !-graph context that accepts and (note that the compatibility of and means that if accepts one then it must accept the other).
In this version of LEIBNIZ,the context graph takes the place of in the !-graph rewrite
and the internal frames and take the place of .
The final piece of this logic of !-graph frames is the !-box operations. We extend them to !-graph cospans in the same way that we extended them to !-graph equations: if
then
Now we can write, for example,
(since the compatibility of and means that we consider to be in both if it is in either), and similarly for the other !-box operation rules.
Thus we have a logic with a normal equality predicate based on !-framed cospans. What is more, by considering only concrete graphs, the same construction can be used to produce a logic based on framed cospans from the logic presented in section 3.5.
In practice, a theorem prover would probably use names to store the equivalent of the !-graph frames; the signing maps and the division between the domain and codomain are not particularly important for the construction of proofs (as can be seen from the fact that the !-graph equation structures do not store this information).
Chapter 7
Computability of !-Graph Matching
Suppose we have a !-graph (or string graph) and a !-graph rewrite rule that we wish to use to rewrite ; as previously stated, we want rewriting to be up to wire homeomorphism. If we can find a rewrite rule that is wire-homeomorphic to an instance of such that there is a local isomorphism that reflects !-box containment, theorem 5.3.3 tells us that the rewrite (7.1)

exists. In fact, it is easy to find this rewrite: if we remove from the image of everything in the interior of (ie: the image of everything in that is not in the image of ), we get ; is the subgraph relation and is a restriction of , and , being a pushout of monomorphisms, is just a graph union. In a practical implementation, where vertices and edges are likely to be named components, the main difficulty is managing the names to make sure there are no clashes when building .

That still leaves finding suitable instances, finding suitable rules wire-homeomorphic to those instances and finding a local isomorphism that reflects !-box containment. Section 3.6.2 discussed the latter two (except reflecting !-box containment, which can be implemented by filtering the results); in this chapter, we will demonstrate that, providing is "well behaved" in terms of the instances it can produce, we can finitely enumerate all instances of the rule that can result in a matching onto a particular . We will also describe how Quantomatic implements these steps.
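To make the deletion-and-gluing step concrete, here is a minimal sketch (illustrative code of our own, not Quantomatic's implementation; all names such as `dpo_rewrite` are invented) of computing the rewrite for a rule with interface `K` and a matching `m` of the LHS into `G`, on simple directed graphs given as adjacency dicts:

```python
# A minimal sketch (illustrative code, not Quantomatic's implementation)
# of computing the rewrite for a rule L <- K -> R and a matching m of L
# into G, on simple directed graphs given as adjacency dicts.  We delete
# the images of the parts of L not in the interface K, then glue in R,
# giving fresh names to its new vertices to avoid clashes.

def dpo_rewrite(G, L, K, R, m):
    # vertices of L outside the interface: their images are deleted
    gone_v = {m[v] for v in L if v not in K}
    # edges of L not preserved by K: their images are deleted too
    gone_e = {(m[u], m[v]) for u in L for v in L[u]
              if u not in K or v not in K[u]}
    ctx = {v: {w for w in G[v] if w not in gone_v and (v, w) not in gone_e}
           for v in G if v not in gone_v}
    # glue in R: interface vertices keep their images under m,
    # other vertices of R get fresh names
    name = {v: m[v] for v in K}
    counter = 0
    for v in R:
        if v not in name:
            while ('fresh', counter) in ctx:
                counter += 1
            name[v] = ('fresh', counter)
            counter += 1
    H = {v: set(ws) for v, ws in ctx.items()}
    for v in R:
        H.setdefault(name[v], set())
        H[name[v]].update(name[w] for w in R[v])
    return H
```

Since the pushout is along monomorphisms, the gluing step is just a union of the context with the (renamed) RHS, as noted above; the fresh-naming loop is the name management the text mentions.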
7.1 Enumerating Instances
In order to find all possible matchings, we need a way of searching the space of graphs such that ; while this space will almost always be infinite, it will usually be the case that only a finite number of them will be candidates for matchings.

Suppose we wish to match a !-graph onto another !-graph ; we need to find an instance of and a monomorphism from to (or, rather, from a graph wire-homeomorphic to to a graph wire-homeomorphic to ). The monomorphism must satisfy additional constraints (definition 5.3.2), but these are not important to this discussion.

As long as has any (unfixed) !-vertices, it will have an infinite number of instances (to see this, consider repeatedly applying COPY to one of the !-vertices). However, in most cases, only a finite number of these will have a monomorphism to .

In particular, we can safely ignore any instance of that has more node-vertices, circles or fixed !-vertices than , as the number of each of these is constant under wire homeomorphism. We cannot, however, restrict the search based on the number of wire-vertices. Consider, for example, the following pattern and target graphs:
No matter how many times the !-box in the pattern is expanded, we can always introduce enough wire-vertices in the target graph's wire to accommodate them all. So there are an infinite number of matchings in this case. Empty !-boxes do not even need a wire in the target graph to exhibit this behaviour.
We call these (unfixed) !-boxes, containing no node-vertices or circles, wild !-boxes. In order for it to be possible to enumerate all candidate instances of , we will require that no instance of contains any wild !-boxes. To show this for any given , it is sufficient to show that for each , is not wild after killing all !-vertices not in . The rest of this discussion will assume no instance of contains wild !-boxes.
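As a simplified illustration (hypothetical names; the full condition in the text also handles nesting by first killing the !-vertices outside the box's hierarchy), a !-box is wild precisely when its contents include no node-vertex and no circle:

```python
# A simplified sketch of the wildness check (hypothetical names).  A !-box
# is wild precisely when its contents include no node-vertex and no circle.

def wild_bboxes(contents, node_vertices, circle_vertices):
    """contents: dict mapping each !-vertex to the set of vertices in its
    !-box.  Returns the !-vertices whose !-boxes are wild."""
    solid = set(node_vertices) | set(circle_vertices)
    return {b for b, vs in contents.items() if not (set(vs) & solid)}
```

A rule is then admissible for the enumeration below only when this set is empty for every instance.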
The first step is to consider how to explore the space of instances in an ordered way. There may be an infinite number of witnesses of even a single instance, since applying to a graph with a !-vertex yields a graph isomorphic to , and so we cannot simply try to build every possible instantiation. At the same time, we cannot depend on expansion-normal form, as we need to deal with arbitrary instances, not just concrete ones.

That said, we can produce something similar enough to expansion-normal form for our purposes. We will use the FIX operation from section 6.4, and we will adjust our definition of depth to ignore !-vertices that have been fixed (so a !-vertex with only fixed parents has depth 0). Since we are really using this as a notational convenience, we will relax the constraints on FIX to allow the fixing of !-vertices whose parents are all fixed. The important thing to note is that, in the context of instantiations, FIX behaves just like DROP: after , there can be no further operations on , and it can potentially reduce the depth of any children of .

If we take an instantiation of from and a fresh fixing tag , and append a operation for every , then the result, call it , is enough like a concrete instantiation for the arguments from section 6.2 to hold with minimal adjustment (just read "FIX or DROP" wherever DROP is mentioned). We also need an equivalent of ; we will call this , which will be a shorthand for .

So we can transform into a witness for that is depth-ordered (according to our revised definition of depth) and is composed entirely of EXP, KILL and operations. As a result, we only need to search for instantiations of this form in order to find all possible instances.
Remark 7.1.1. Using this trick comes with a caveat: when we attempt to find a morphism from to , an -fixed !-vertex would ordinarily only be allowed to match another -fixed vertex. However, we want -fixed vertices to behave like unfixed vertices when it comes to building matching morphisms, and allow them to match any vertex of the target graph. This can be achieved either by removing all -fixing tags from before finding the matching morphism (which is the approach taken by the algorithm in section 7.2) or by relaxing the constraint with a special case for the tag.

We will call the area of the pattern graph not in any !-boxes, or only in fixed !-boxes, the match surface. This is the graph that would result from killing all the unfixed !-boxes. Importantly, no EXP, KILL or operation can reduce the match surface. Furthermore, if every !-box of every instance of contains at least one node-vertex or circle, EXP and must always increase the number of node-vertices or circles in the match surface.

We will take an inductive approach to finding instances of that are candidates for a matching onto , starting with the zero-length instantiation. To build the instantiations of length , for each instantiation of length (resulting in a !-graph ) and each unfixed with depth 0, we will create three new instantiations by appending , and to . If the resulting graph has more node-vertices, circles or fixed !-vertices in its match surface than are in , we will discard that instantiation (and hence not consider it as a prefix for the instantiations of length ).
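The search loop can be sketched with a count-based abstraction (all names here are illustrative; the real algorithm works on graphs and also tracks circles and fixed !-vertices):

```python
# A count-based sketch of the enumeration loop of section 7.1 (all names
# are illustrative).  Each unfixed top-level !-box is abstracted to the
# number of node-vertices it contains; the match surface is abstracted to
# its node-vertex count.  EXP copies a box's contents into the surface and
# keeps the box, KILL deletes the box, and FIX freezes it (moving its
# contents into the match surface).  Prefixes whose surface exceeds the
# target are discarded, exactly as in the algorithm above.

def enumerate_instances(surface, boxes, target):
    """Yield the match-surface sizes of all candidate instances.

    surface: node-vertices already outside all unfixed !-boxes;
    boxes: tuple of node-vertex counts, one per unfixed top-level !-box;
    target: number of node-vertices in the target graph.
    """
    frontier = [(surface, boxes)]
    while frontier:
        nxt = []
        for surf, bs in frontier:
            if not bs:
                yield surf                      # a fully instantiated candidate
                continue
            for i, k in enumerate(bs):
                rest = bs[:i] + bs[i + 1:]
                for surf2, bs2 in ((surf + k, bs),     # EXP
                                   (surf, rest),       # KILL
                                   (surf + k, rest)):  # FIX
                    if surf2 <= target:                # discard oversized prefixes
                        nxt.append((surf2, bs2))
        frontier = nxt
```

Note that if some box count were 0 (a wild !-box), EXP would never enlarge the surface and the loop would not terminate, which is exactly why wild !-boxes must be excluded.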
Remark 7.1.2. Note that there are more efficient versions of the algorithm; for example, it is sufficient to choose an arbitrary depth-0 !-vertex at each stage rather than generating new instantiations for each such !-vertex. However, it is easier to see that the algorithm here is correct, and this version suffices to show computability.

It should be clear, given the preceding discussion, that the algorithm will enumerate all instances of that have a monomorphism to . It remains for us to show that it will terminate, providing every !-box of every instance of contains at least one node-vertex or circle.
Let be the number of node-vertices in and the number of circles. Suppose is an instantiation produced by the algorithm at some iteration step, and let and be the corresponding properties for the match surface of . Every EXP and operation must increment either or (and KILL cannot decrease either), and the algorithm will discard any instantiation where
So there can be at most EXP or operations in .
Each of these operations can, at most, double the number of unfixed !-vertices in the graph, and KILL can only reduce that number. Thus cannot contain more than KILL operations. , the length of , is therefore bounded by
which is a finite number fixed by and .
The same bound also places a limit on , limiting the number of ways can be extended. The total number of steps in the algorithm is then bounded, and hence it terminates.
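Written out with illustrative symbols (these letters are our own choice and need not match the notation used elsewhere in the thesis): if the target graph has $n$ node-vertices and $c$ circles, and the pattern has $B_0$ unfixed !-vertices, then, since every EXP and FIX must add a node-vertex or circle to the match surface while at most doubling the number of unfixed !-vertices, and KILL only removes unfixed !-vertices, a retained instantiation $S$ satisfies

```latex
% Illustrative reconstruction of the termination bound; the symbols
% n, c, B_0, S are our own choice.
\[
  \#\mathrm{EXP}(S) + \#\mathrm{FIX}(S) \;\le\; n + c,
  \qquad
  \#\mathrm{KILL}(S) \;\le\; B_0 \cdot 2^{\,n+c},
\]
\[
  |S| \;\le\; (n + c) + B_0 \cdot 2^{\,n+c}.
\]
```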
Note that this approach encompasses rewriting both string graphs and !-graphs with !-graph rewrite rules; the former is simply a special case of the latter, without any operations.
7.2 Matching in Quantomatic
Quantomatic interleaves the three steps of !-graph matching (instantiation, wire homeomorphism and building a suitable morphism) in a branching algorithm. In this section, we will describe this approach. We will start with the algorithm as currently implemented, which assumes the target graph has no !-vertices, and then describe how we plan to extend it to allow arbitrary !-graphs as the target.
The algorithm maintains a partial graph monomorphism from the pattern graph to the (concrete) target graph. It terminates when this is a complete morphism, and is successful if it is a local isomorphism.
The overall approach is to extend the graph morphism to the match surface (those parts of the pattern graph not contained in any unfixed !-boxes), then choose a top-level !-vertex and try killing it or expanding it. In the expansion branch, the match surface will have increased, so we repeat the graph-morphism extension step before trying another !-vertex. In the killing branch, we proceed straight to choosing another !-vertex. At each choice that could affect the final match, the algorithm branches, so as to explore all options. If it fails to extend the graph morphism to the match surface at any point, that branch terminates with failure.

Circles are matched first in each round of extending the graph morphism. Circle matching involves taking each unmatched circle in and matching it to an arbitrary unmatched circle of the same type in , and removing the vertex from . Since circles are indistinguishable under rewriting (being wire-homeomorphic to each other), no branching needs to occur. Doing this first therefore reduces the amount of work duplicated across branches of the algorithm.
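The circle-matching step can be sketched as follows (illustrative code; circles are represented only by their wire types). Because any two unmatched circles of the same type are interchangeable, a greedy choice suffices and no branch point is created:

```python
# A sketch of the circle-matching step (illustrative code).  Because any
# two unmatched circles of the same wire type are wire-homeomorphic, a
# greedy choice suffices and no branch point is created.

def match_circles(pattern_circles, target_circles):
    """pattern_circles / target_circles: dicts circle-name -> wire type.
    Returns a matching dict, or None when some circle cannot be matched
    (in which case this branch of the algorithm dies)."""
    unmatched = dict(target_circles)
    m = {}
    for c, ty in pattern_circles.items():
        pick = next((t for t, tty in unmatched.items() if tty == ty), None)
        if pick is None:
            return None            # no circle of the right type: branch dies
        m[c] = pick
        del unmatched[pick]
    return m
```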
Figure 7.1: !-graph matching onto string graphs
Figure 7.2: String graph matching subroutine
Bare wires are left until the very end of the matching process, after all the instantiation has been done. This is because (as noted in the discussion on wild !-boxes) bare wire matching does little to constrain the possible instantiations and can generate a lot of branches. For each bare wire in the match surface, the bare wire matching step branches for each unmatched edge in connected to a wire-vertex of the same type as the bare wire. It then replaces with three edges and two wire-vertices (of the appropriate type):
and matches to the new wire-vertices and the edge between them.
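The edge-replacement step can be sketched as follows (illustrative names; `w1` and `w2` stand for the two fresh wire-vertices of the appropriate type):

```python
# A sketch of the edge-replacement step of bare wire matching
# (illustrative names).  The chosen unmatched target edge e = (u, v) is
# replaced by a path of three edges through two fresh wire-vertices; the
# bare wire is then matched to w1, w2 and the edge between them.

def match_bare_wire(target_edges, e, w1, w2):
    u, v = e
    return (set(target_edges) - {e}) | {(u, w1), (w1, w2), (w2, v)}
```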
To deal with wire homeomorphism, the algorithm starts with a variant of the normalisation scheme from section 3.6.2: every interior wire has two wire-vertices; bare wires have only the input and output vertices; circles have a single wire-vertex; and input and output wires have only the input or output vertex in the pattern graph but an extra wire-vertex in the target graph. This ensures that a wire that can accept a match from a bare wire (ie: any wire other than an interior wire already matched by an interior wire, or a circle matched by a circle) always has at least one unmatched edge when the bare wire matching step is reached.
During a run of the algorithm to match a !-graph onto a string graph , Quantomatic maintains the following state:
: a normalised (as described above) !-graph
: an instance of
: a partial instantiation of , resulting in
: a normalised (as described above) string graph
: a partial injective function describing the matching so far
: a set of unmatched vertices of
: a set of partially-matched node-vertices
: a set of scheduled partially-matched node-vertices
: a set of vertices that can be matched onto
Note that we do not maintain a map of edges; this is because !-graphs are simple, so if a function from vertices of one graph to vertices of another extends to a graph morphism, it does so uniquely.
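This observation is easy to state in code: for simple graphs, a vertex map extends to a graph morphism exactly when it sends every edge to an edge, and the extension on edges, when it exists, is determined (a sketch with graphs given as edge sets; illustrative code):

```python
# For simple graphs, a vertex map extends to a graph morphism exactly when
# it sends every edge to an edge, and the extension on edges is then
# uniquely determined, so no edge map needs to be stored.

def extends_to_morphism(G_edges, H_edges, f):
    return all((f[u], f[v]) in H_edges for (u, v) in G_edges)
```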
is partitioned into , containing wire-vertices in circles (of which there can only be one for each circle), , containing other wire-vertices, and , containing node-vertices.

A note on terminology: a vertex is matched if it is in either the domain or image of . If is a matched node-vertex of , we call it (and its image ) partially matched if some of the neighbourhood of is not in the image of . Otherwise, it is completely matched. The same terminology applies to . For example, consider the following incomplete matching:

has been matched to a vertex in the target graph, but we will need to expand the !-box twice before we are done with . This is an example of a partially matched node-vertex.

is (a superset of) the set of partially matched vertices of , and is intended to contain those node-vertices of that may be able to become completely matched due to the application of a !-box operation.
We split the algorithm description into two parts. Figure 7.1 contains the part of the algorithm that explores the instantiation space, and figure 7.2 has the part that extends the match morphism. This latter part can be used in a simpler wrapper to match string graphs onto string graphs.
Red wires (with labels starting "each") indicate a branching point. A branch is created for each possibility. If there are no possibilities, the current branch dies. So, for example, if the algorithm takes a vertex from and there are no possible matchings for it in the target graph, that branch will be killed off.

The initial match state has and normalised (in the manner described above), , and populated with all the vertices of not in bare wires.

A proof of the correctness of this algorithm can be found in appendix A.
7.2.1 Matching Onto !-Graphs
In this section, we describe how the previous algorithm can be extended to match arbitrary !-graphs. This algorithm has been implemented for Quantomatic (albeit not in a released version yet).
Firstly, we need to maintain information about which !-vertices are fixed. Since !-vertices are only fixed at intermediate stages of a proof (when using INDUCT or during the matching process itself), we maintain the information about which !-vertices are fixed with which tags separately from the graphs. Thus we extend the match state (and algorithm inputs) with two partial maps, and .
We also add two further sets to the match state to keep track of !-vertices:
: a set of unmatched fixed !-vertices
: a set of !-vertices that can be copied
Figure 7.3: !-graph matching onto !-graphs
Figure 7.4: !-box matching subroutine
: a set of !-vertices that can be expanded
The initial match state is mostly empty as before, with and again normalised. We also populate with the vertices of that are not in any !-box, and with all the vertices of that are not in any !-box. The new parts of the match state are populated as follows:

The string graph matching subroutine is extended (via "hooks" in the implementation) to allow the matching to be constrained by !-box membership. In particular, we only allow a vertex to match a vertex if , where and :

Also implicit in the algorithm description is that when we kill a !-vertex , we always add any vertices in that were adjacent to to , so that the next string graph matching run can remove them from if they are now completely matched.

Additionally, one part of figure 7.4 refers to a "candidate for ". If , we say that is a match candidate for if

it shares the same parents (ie: ), and
if is fixed, is fixed with the same tag (ie: )
The major difference between this algorithm and the previously presented one is that is provided as an alternative choice to and when the algorithm deals with the next top-level !-vertex of .
Chapter 8
Conclusions and Further Work
In this dissertation, we have demonstrated how the string graph formalism for the diagrammatic languages of traced symmetric monoidal categories and compact closed categories can be extended to allow the finitary representation of infinite families of string graphs and string graph equations, and how these !-graphs and !-graph equations can be used in conjunction with double-pushout graph rewriting to do equational reasoning both with and on infinite families of morphisms in these categories. We have also presented some inference rules for a nascent logic of !-graphs.
We started by extending the language of monoidal signatures to allow for nodes with variable-arity edges, representing families of morphisms such as the spiders induced by commutative Frobenius algebras.
We then further extended string graphs with a language, internal to the graph, describing potentially infinite families of string graphs. We extended the notion of string graph equations to these !-graphs, constructing !-graph equations (and rewrite rules) to represent infinite families of string graph equations (and rewrite rules).
We demonstrated how these !-graph rewrite rules can be used both to rewrite string graphs and to rewrite !-graphs, producing new !-graph rewrite rules. We showed that this rewriting is sound with respect to the interpretation of !-graphs and !-graph rewrite rules as families of string graphs and string graph rewrite rules.

We described an equational logic of !-graphs implemented by !-graph rewriting and built on this with further inference rules, forming a logic of !-graphs that is again sound with respect to the interpretation of !-graphs as families of string graphs. This included a graphical analogue of induction, which we used to derive the spider law for commutative Frobenius algebras.
We rounded off by showing how !-graph rewriting can be implemented. We demonstrated that, providing a !-graph rewrite rule has no instantiations with wild !-boxes, it is possible to determine the finite set of instances of the rule that apply to a given !-graph, and gave an example of an algorithm to find matchings, with their associated instantiations, from the LHS of a !-graph rewrite rule to a string graph or !-graph.
8.1 Further Work
The most obvious next step would be to remove the requirement that variable-arity edges of the same type commute by placing an order on the (variable-arity) edges of a node and using this order when determining the value of the elementary subgraph containing that node. This would allow us to have spiders for non-commutative Frobenius algebras.
A visual way to represent this could be to place a "starting mark" on a node, and count clockwise from there:
Then we would have that the following two graphs are not the same:
and, in particular, we would not allow a graph homomorphism from one to the other. The implications of this need working out in detail to ensure there are no unexpected side-effects; for example, there is a potential difference between expanding a !-box clockwise and anti-clockwise. It would allow more expressivity, though, and the commutative version presented in this thesis should be recoverable by placing constraints on the allowed valuations.
Another idea that could be investigated is "defined" generators. For example, suppose we have a commutative monoid . We could define a "spidered" version of this in the following manner:
This essentially defines two !-graph rewrite rules
Some restrictions would need to be placed on such a rewrite system, such as termination and preservation of value, in order for it to be considered a definition, but this could be a powerful tool for proving theorems. Appendix B uses it to prove the spider law for a commutative Frobenius algebra.
The above definition has a major drawback, though, in that it implicitly requires that the monoid is commutative. For example, suppose we have the graph
This can be expanded to
using the rewrite system implied by the definition, but it can also be expanded to
which, in the absence of commutativity and associativity laws, could have a different value. This is because all the incoming edges of the variable-arity multiplication have the same type, and so we have a choice of matches when rewriting.
However, if we combine this idea of definitions with the non-commutative generators already mentioned, we can make the definition
which will produce a unique expansion for each possible starting graph. In particular, it forces the incoming edges to this node to be expanded in clockwise order, so
has the unique expansion
There are undoubtedly more inference rules to be discovered. For example, a variant on (section 6.3) that adds a fresh !-vertex to but only adds edges to itself and to vertices in is likely to give rise to an inference rule similar to Box, but this has yet to be proved sound.
Kissinger introduced a notion of critical pairs for graphs in [22]. The rewriting of !-graphs laid out in this thesis could potentially be used, in combination with Kissinger's definition of critical pairs, to compute confluent extensions to rewrite systems of !-graphs using a Knuth-Bendix completion algorithm [19].
Of course, one of the most important steps to take is to actually implement the ideas presented in this thesis. In particular, we are planning to implement and prove the correctness of the algorithm in section 7.2.1. This algorithm could no doubt be improved, however, to make it more efficient. For example, many graphs involving commutative Frobenius algebras have a lot of internal symmetry, which results in multiple matches being found that all produce the same rewrite; it would be desirable to eliminate this duplication, preferably at the matching stage.

The implementation of the ideas in this thesis is work that is currently underway in the Quantomatic project [25], which aims to be a useful proof assistant for string diagrams in compact closed and traced symmetric monoidal categories. This encompasses QuantoCosy, a tool for generating rules from models (as set out in [24]), QuantoDerive, a tool for constructing and checking graphical proofs, and QuantoTactic, an Isabelle tactic implementation using Quantomatic's graph-matching engine to direct proofs and check their correctness in Isabelle.
Some initial work on this was done by Matvey Soloviev when the theory of !-graphs and string graphs was still in flux, but nothing was ever published; this could be revisited now that there is a secure foundation.
Appendix A
Correctness of Quantomatic's Matching
We prove the correctness of the algorithm presented in section 7.2 for matching !-graphs onto string graphs.
Recall the match state:
: the pattern graph (normalised)
: an instance of
: the target graph (normalised)
: a partial instantiation of , resulting in
: a partial injective function describing the matching so far
: a set of unmatched vertices of
: a set of partially-matched node-vertices
: a set of scheduled partially-matched node-vertices
: a set of vertices that can be matched onto
Throughout the entire procedure, in addition to the invariants implicit in the typing of the components of the match state, we maintain the following global invariants:
(1) is injective and respects vertex types
(2) For any , if there is an edge from to in , then there is an edge from to in

(3) and for any node-vertex , all wire-vertices adjacent to in are in the image of

Given that !-graphs are simple, invariants (1) and (2) ensure that when is total, it uniquely extends to a !-graph monomorphism . Invariant (3) ensures that, if is empty, will be a local isomorphism. Thus will be a matching of onto under .
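Invariant (3) amounts to a local-isomorphism check at node-vertices, which can be sketched as follows (illustrative code, not Quantomatic's implementation; adjacency is a dict from vertices to neighbour sets, and `f` is assumed total, injective and edge-preserving):

```python
# A sketch of the local-isomorphism check behind invariant (3)
# (illustrative code).  Adjacency is a dict from vertices to neighbour
# sets; f is assumed to be a total, injective, edge-preserving vertex map
# from the pattern P to the target T.

def is_local_isomorphism(P_adj, T_adj, f, node_vertices):
    """Check that at every node-vertex v of P, f restricts to a bijection
    between the neighbours of v and the neighbours of f(v)."""
    for v in node_vertices:
        if {f[w] for w in P_adj[v]} != set(T_adj[f[v]]):
            return False
    return True
```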
A.1 The String Graph Matching Subroutine
The algorithm is shown in figure A.1.
A.1.1 Preconditions/Invariants
In addition to the global invariants, the string graph matching subroutine has the following requirements on the match state. What is more, these are maintained throughout the procedure.
(i) and are both normalised
(ii) (ie: the vertices in the unmatched set are not part of the existing matching)
(iii) Each vertex in is adjacent to something in (ensures we can always reach wire-vertices in by starting from a node-vertex in or )
(iv) If and (everything that is completely matched and in is also in )
(v) If and is a wire-vertex adjacent to in ( contains complete wires)
(vi) If is a wire-vertex, then any vertices it is adjacent to are also in

(vii) If is a wire-vertex, then any vertices it is adjacent to are in either or
(viii)
A.1.2 Postconditions
These are true for the match state that results from each successful branch:
(I) (every vertex marked for matching was handled)
(II) is the union of the initial states of and (we have matched exactly what we were asked to)
(III) contains only vertices that were in either the initial state of or the initial state of (matches were only made against vertices in )
(IV) is exactly the set of vertices in whose image is adjacent to a wire-vertex not in ( contains exactly the partially-matched vertices)
(V) and are identical to their starting states
Figure A.1: String Graph Matching Subroutine
Postcondition (II) (in conjunction with the global invariants) states that the algorithm has completed the requested work. Postcondition (I) just states that we left the match state "clean"; it still satisfies all the preconditions, and repeating the algorithm without altering the state further would do nothing. Postcondition (IV) says that is the smallest set that satisfies global invariant (3).
A.1.3 Termination
For termination of the inner loop exploring the wires incident to a node-vertex , we consider the variant , where is the neighbourhood of in . In each of these branches, decreases by either 1 or 2, and nothing is ever added to (and is fixed). So this loop must terminate.

The loop working through has two branches: one taken when is empty and one when it is not. Either may add more things to . We will take as variant . When this is 0, the algorithm will terminate. Otherwise, if is empty, will have one vertex removed and added to , decreasing by one. If is not empty, one vertex will be removed from , decreasing by one. Every vertex that is added to in this loop is taken from , further decreasing . As a result, will decrease by at least one with every iteration of the loop.
A.1.4 Correctness
There are seven steps of the algorithm that modify the match state; we will show that each of them satisfies the following properties, under the assumption that the preconditions (section A.1.1), global invariants and match state type constraints hold at the start of the step (note that when we say that a vertex is taken from a set, we mean that it was in the set at the start of the step, and is removed from the set by the end of the step):
(a) and are unmodified
(b) If or are modified, the only modifications are to take vertices from and extend to map those vertices to ones of the same type taken from

(c) If a wire-vertex is added to , then so is the entire wire, and for any pair of vertices in the wire with an edge from to , there is an edge from to (at the end of the step) in

(d) Any vertex taken from is added to both and , and these are the only vertices added to either or

(e) If or are modified, the only modifications are the one from (d), removing a partially matched vertex with no adjacent vertices in from , or removing a completely matched vertex with no adjacent vertices in from and
Given that these properties hold, we can show that all steps of the algorithm preserve the match state type constraints, the global invariants and the preconditions.
We can see that (a) holds for all seven steps by inspection. It is clear that the other three properties hold for steps (1)-(3), again by inspection. Only steps (5)-(7) need extra justification. We first note that the selection of demands that, since is in normal form, it is part of an interior wire with two wire-vertices, or is the input on an input wire or the output on an output wire (and the sole wire-vertex on the wire in both cases).

For step (5), the only non-obvious property is (c). However, by what we have already noted, since is not on an interior wire it must be an input or output. Since and are the only vertices on the wire and is already in , and the edge between and is in the same direction as the edge between and , property (c) holds for (5).

In step (6), it is clear that and are in and is in . Precondition (v) ensures that is in . Since and are !-graphs, must have the same type as , which has the same type as , which has the same type as . So property (b) clearly holds. Property (c) can be seen to hold by noting that the edge between and is in the same direction as the edge between and , and all the edges in a wire must be in the same direction. Properties (d) and (e) hold trivially, as , and are not modified.

For step (7), we note that precondition (iii) (together with global invariant (3) and the fact that ) requires that, since , and must be in . Then properties (b) and (c) follow in the same manner as for (6), and properties (d) and (e) clearly hold.
We now show that these properties mean that the match state type constraints, global invariants and preconditions are all maintained.
Type constraints The typing constraints for and are trivially maintained, as (b) demands they are never changed. Properties (d) and (e) ensure that is only ever added to from, and that anything added to is also added to , and anything removed from is also removed from , so those constraints hold. Finally, the constraint on holds by property (b).
Global Invariants Injectivity of is preserved, as property (b) demands that only vertices drawn from may be used for images of additions to , and precondition (viii) gives us that these must

Suppose there is an edge in from a vertex to another vertex , and both are in at the end of a step, but one was not at the start of the step. We know they cannot both be node-vertices, and so one or both are wire-vertices. Since at least one was not in the image of at the start of the step, none of the wire-vertices can have been, by precondition (vi). Then property (c) ensures that there is an edge from to . So global invariant (2) is preserved.

Global invariant (3) is preserved when vertices are removed from due to property (e), and when vertices are added to due to property (b).
Preconditions/Invariants
(i) and are both in normal form
By property (a), which ensures and are never changed.
(ii)
Property (b) requires that every vertex added to is removed from .
(iii) Each vertex in is adjacent to something in
Given that property (b) prevents anything being added to , this precondition can only be broken by removing something from or . Property (d) ensures that anything removed from is added to , and property (e) ensures that anything removed from has no adjacent vertices in .
(iv) If and
Property (e) ensures that when we add anything to , we also add it to , and we only remove completely matched vertices from .
(v) If and is a wire-vertex adjacent to in
Property (c) (in conjunction with property (b)) ensures that only entire wires are removed from at once (and nothing is ever added to ).
(vi) If is a wire-vertex, then any vertices it is adjacent to are also in . This is preserved due to property (c).

(vii) If is a wire-vertex, then any vertices it is adjacent to are in either or . This is maintained by property (b), which ensures that anything removed from is added to and, due to precondition (ii), also ensures that nothing is ever removed from .
(viii)
This is preserved by property (b), which ensures that nothing is ever added to , and anything added to is removed from .
Postconditions
(I)
The conditions that lead to the "done" step ensure that and are empty. Then precondition (iii) means that must be empty. is empty at the end because it is empty when step (1) is complete, and property (b) ensures it is never added to.
(II) is the union of the initial states of and
Property (b) ensures that everything removed from is added to , that nothing else is added to and that nothing is removed from . Combining this with precondition (ii) gives us that no entry of is ever overwritten, and so postcondition (I) provides us with what we need.
(III) contains only vertices that were in either the initial state of or the initial state of . By property (b).

(IV) is exactly the set of vertices in whose image is adjacent to a vertex not in . Global invariant (3) gives us most of what we need. We just need that if is completely matched, it is not in . But this is guaranteed by invariant (iv) and the fact that .
(V) and are identical to their starting states
This follows immediately from property (a).
Completeness
Let be a matching of the full subgraph of given by the vertices into that restricts to on the vertices in . Then we need to show that, after completion of the algorithm, one of the match states produced will agree with on vertices, up to a choice of circles. Note that, by the argument at the start of section A, this will then uniquely extend to .
Since is a matching with in its domain (and restricts to ), there must be at least unmatched circles in , and so the step where we match circles must succeed.
Invariant: the vertex map part of restricts to . is extended in two places (after circle-matching). Each of these follows a branching point. We need to show that in both of these places, there is a branch that will maintain this invariant. We also need to show that none of the "die" branches are taken.
When adding something from , it is clear that is a valid matching for , and so this is a branch that will maintain the invariant.
When adding something from , the wire starting must be a valid matching for the wire starting with , as the entire wire must be in the domain of : if is an input or output, this is trivial. Otherwise, there must be a wire-vertex adjacent to , and a node-vertex adjacent to . Then is in , and hence in , by precondition (iv), or , by precondition (iii), and so in or , and hence is in . Once we have matched against , the only possible matching for is , and similarly must match against . If is already in , we know it maps to , so the matchings must agree. So this branch maintains the invariant (and does not die).
Thus, regardless of how vertices are chosen from or , there must be a trace that returns a match where agrees with on vertices.
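The branching search argued about above can be illustrated with a minimal backtracking monomorphism matcher. The following is a generic Python sketch, not the subroutine of this section: it ignores wire-homeomorphism, vertex typing and the partial-match bookkeeping, and simply extends a vertex map one vertex at a time, branching over candidates and letting inconsistent branches die.

```python
def find_matches(pattern, target):
    """Enumerate injective graph monomorphisms from pattern into target.

    Both graphs are adjacency maps: vertex -> set of adjacent vertices.
    The search has the same branching shape as the matching subroutine:
    pick an unmatched pattern vertex, branch over every compatible
    target vertex, and let a branch "die" when no consistent choice
    remains.
    """
    def extend(m):
        unmatched = [v for v in pattern if v not in m]
        if not unmatched:
            yield dict(m)           # a completed match
            return
        v = unmatched[0]
        for t in target:
            if t in m.values():
                continue            # keep the vertex map injective
            # every already-matched neighbour of v must land on a
            # neighbour of t; otherwise this branch dies
            if all(m[n] in target[t] for n in pattern[v] if n in m):
                m[v] = t
                yield from extend(m)
                del m[v]            # backtrack
    yield from extend({})
```

For instance, a single edge has six embeddings into a triangle, while a triangle has no embedding into a three-vertex path.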
A.2 The Outer Loop
The algorithm is shown in figure A.2. The steps that "apply" a !-box operation implicitly apply the operation to and record it in .
Figure A.2: !-graph matching onto string graphs
Note that the global invariants are trivially satisfied at the start of the algorithm, since and are empty. They are maintained thereafter, as only the string graph matching subroutine and the bare wire matching routines alter or , and the !-box operations (which are the only things to alter ) can only add edges where either the source or the target is not in .
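The shape of the loop can be sketched in miniature. The toy below makes heavy simplifying assumptions (!-graphs become multisets, each !-box repeats a single element, and the string graph matching step becomes a containment check; the names are illustrative, not Quantomatic's), but it shows the search structure: match the concrete part, then branch over expanding or killing each remaining !-box.

```python
from collections import Counter

def match_bang(fixed, boxes, target):
    """Toy skeleton of the outer loop over multisets.

    'fixed' is the concrete part of the pattern, 'boxes' maps each
    !-box name to the single element it repeats, and 'target' is the
    multiset being matched.  We search for a sequence of expand/kill
    operations whose result matches 'target' exactly, returning the
    operation trace, or None if no trace succeeds.
    """
    target = Counter(target)

    def go(need, boxes, trace):
        # "string graph matching" step: the concrete part must fit
        if any(need[e] > target[e] for e in need):
            return None                      # this branch dies
        if not boxes:
            # no !-boxes left: succeed only on an exact match
            return trace if need == target else None
        name, elem = next(iter(boxes.items()))
        rest = {k: v for k, v in boxes.items() if k != name}
        # branch 1: expand the !-box once (it remains available)
        found = go(need + Counter([elem]), boxes,
                   trace + [('expand', name)])
        if found is not None:
            return found
        # branch 2: kill the !-box
        return go(need, rest, trace + [('kill', name)])

    return go(Counter(fixed), dict(boxes), [])
```

Expansion branches are pruned as soon as they overshoot the finite target, which is the multiset analogue of the variant argument in section A.2.1.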
A.2.1 Termination
The algorithm terminates, on the assumption that no sequence of !-box operations will ever produce a wild !-box, by the following argument.
Each step terminates, and there is only one loop. As a variant, we take
We calculate the variant at the point where we consider terminating the loop, after the string graph matching step. If the previous iteration involved killing one or more !-boxes, will decrease (by the combined size of the !-boxes, less bare wires; this amount is positive due to the assumption about wild !-boxes). If the previous iteration involved expanding a !-box, will increase (by the size of the !-box, excluding any bare wires). is clearly bounded below by 0, and is bounded above by , by global invariant (1), so is bounded below by 0 and decreases with every iteration. Hence the algorithm terminates.
Note that Quantomatic's implementation ensures termination for all pattern graphs by killing any wild !-boxes as soon as they are produced.
A.2.2 String Graph Matching Preconditions
We need to show that the preconditions in section A.1.1 are satisfied whenever the string graph matching step is run.
(i) and are both normalised
We normalise the graphs at the start. We then do not alter until the bare wire matching step, at which point the string graph matching routine can no longer be called. Because !-boxes are open subgraphs, operations on them always remove or copy an entire wire at once, which ensures that remains normalised throughout.
(ii)
We only ever add unmatched vertices to .
(iii) Each vertex in is adjacent to something in
This follows from openness of !-boxes, which (together with normalisation) ensures that any wire-vertex in the match surface that is not on a circle or bare wire must be adjacent to a node-vertex in the match surface. The first time string graph matching happens, all such node-vertices are put in .
Since all non-bare-wire, non-circle vertices in the match surface are added to each time, and so added to by the string graph matching step, on subsequent iterations of string graph matching any wire-vertex in must have arisen from the expansion of a !-box (since this is the only way the match surface can be extended). If is adjacent to a node-vertex that came from the !-box expansion, that node-vertex will be added to . Otherwise, it must be adjacent to a node-vertex in , and hence in (by the postconditions of the string graph matching step). Then this node-vertex (being adjacent to the expanded !-box) will be added to .
(iv) If and (everything that is completely matched and in is also in )
The first time we run the string graph matching step, is empty. After each string graph matching step, the postconditions ensure that there is no completely matched vertex in . As the rest of the algorithm does not add anything to or alter , this is maintained throughout.
(v) If and is a wire-vertex adjacent to in .
This is guaranteed by openness of !-boxes: we only ever add entire wires to .
(vi) If is a wire-vertex, then any vertices it is adjacent to are also in . This is guaranteed by postcondition (II) in combination with precondition (v) and the fact that is only altered by the string graph matching step.
(vii) If is a wire-vertex,then any vertices it is adjacent to are in either or
and are only modified by the string graph matching subroutine, which maintains this invariant. Initially, is empty, so it holds trivially.
(viii)
As above.
A.2.3 Correctness
At the end of the algorithm, everything that was ever in the match surface except for bare wires has been added to by the string graph matching step (postcondition (II), together with the fact that we always add everything unmatched in the match surface to and , except bare wires). We have no more !-boxes, so the entire graph is part of the match surface. is also empty. We matched bare wires at the last stage, so is in fact total on . So, by the argument at the start of section A, is a valid match from the final state of to . Since we have only ever performed valid !-box operations on , this is a valid instance of , and hence we have found a valid matching from to .
A.2.4 Completeness
Provided no instance of contains a wild !-box (see section 7.1), completeness follows from the fact that any concrete instantiation has an equivalent in expansion-normal form (section 6.2.2).
Appendix B
The Spider Law
The following is joint work with Aleks Kissinger. It is work in progress, as we are still working out the details of the definition style used in this section and intend to extend this to non-commutative generators, as mentioned in section 8.1.
Suppose we have a commutative Frobenius algebra in string graph form. So we have the following generators (node-vertices of the typegraph, with their adjacent edges)
where all the edges are fixed-arity, and all the wire-vertices are of the same type. Note that, in the sequel, we will implicitly identify distinguished fixed-arity edges by their position in our presentation of the graph. This is unambiguous with these particular generators, as we can simply count clockwise from the single output of the multiplication node, or the single input of the comultiplication node.
We also have the following equations, where we identify inputs and outputs by position:
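In term notation, the standard axioms of a commutative Frobenius algebra on generators $(\mu, \eta, \delta, \varepsilon)$, which the equations above express diagrammatically, read:

\[
\begin{aligned}
\mu \circ (\mu \otimes \mathrm{id}) &= \mu \circ (\mathrm{id} \otimes \mu), &
\mu \circ (\eta \otimes \mathrm{id}) &= \mathrm{id}, &
\mu \circ \sigma &= \mu,\\
(\delta \otimes \mathrm{id}) \circ \delta &= (\mathrm{id} \otimes \delta) \circ \delta, &
(\varepsilon \otimes \mathrm{id}) \circ \delta &= \mathrm{id}, &
\sigma \circ \delta &= \delta,\\
(\mathrm{id} \otimes \mu) \circ (\delta \otimes \mathrm{id}) &= \delta \circ \mu
= (\mu \otimes \mathrm{id}) \circ (\mathrm{id} \otimes \delta),
\end{aligned}
\]

where $\sigma$ is the symmetry and the last line is the Frobenius law.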
It is trivial to use the commutativity laws to derive . Suppose we now define a !-boxed version of the multiplication operation in the following manner:
Essentially, we have added a new generator with a variable-arity input and a fixed-arity output, together with rules that allow us to rewrite any instance of it (with any number of incoming edges) into a graph containing only our original set of generators. As noted in section 8.1, this definition assumes associativity and commutativity in order to be consistent.
We now show that a partial form of the spider law holds for variable-arity multiplications.
Lemma B.0.1.
Proof. We proceed by !-box induction. The base case:
and the step case:
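The content of the lemma can also be checked concretely by interpreting wires as elements of a commutative monoid, so that the variable-arity multiplication becomes a fold defined by the same recursion as the !-boxed generator. The Python sketch below makes this interpretation (the names mu_n and check_spider are ours, and only associativity and unitality are actually used):

```python
def mu_n(args, mult, unit):
    """Variable-arity multiplication, by !-box-style recursion:
    mu_0() = unit, mu_{n+1}(x, xs...) = mult(x, mu_n(xs...))."""
    if not args:
        return unit
    return mult(args[0], mu_n(args[1:], mult, unit))

def check_spider(inner, outer, mult, unit):
    """Partial spider law, monoid version: feeding one variable-arity
    multiplication into an input of another equals a single
    multiplication over the combined inputs."""
    nested = mu_n([mu_n(inner, mult, unit)] + outer, mult, unit)
    flat = mu_n(inner + outer, mult, unit)
    return nested == flat
```

For example, with addition on the integers, summing a sub-list first and then summing the result with the remaining inputs gives the same answer as one big sum.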
We define a variable-arity comultiplication in a similar manner:
and prove a similar result about it:
Lemma B.0.2.
Proof. We proceed by !-box induction. The base case: and the step case:
Now we are ready to define our spider:
And the spider law:
Theorem B.0.3. Proof.
Of course, we need to ensure that our defined spider coincides with the original Frobenius algebra operations:
Theorem B.0.4.
Bibliography
[1] S. Abramsky and B. Coecke. A categorical semantics of quantum protocols. In Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science (LICS), pages 415-425. IEEE, February 2004.
[2] F. Akinniyi, A. C. Wong, and D. Stacey. A new algorithm for graph monomorphism based on the projections of the product graph. IEEE Transactions on Systems, Man, and Cybernetics, 16(5):740-751, September 1986.
[3] F. Baader and T. Nipkow. Term Rewriting and All That. Cambridge University Press, 1998.
[4] I. Boneva, A. Rensink, M. E. Kurbán, and J. Bauer. Graph abstraction and abstract graph transformation. Technical report, University of Twente, July 2007.
[5] B. Coecke and R. Duncan. Interacting quantum observables: categorical algebra and diagrammatics. New Journal of Physics, 13(4):043016, April 2011.
[6] B. Coecke and A. Kissinger. The compositional structure of multipartite quantum entanglement. In Proceedings of the 37th International Colloquium on Automata, Languages and Programming (ICALP), volume 6199 of Lecture Notes in Computer Science, pages 297-308. Springer Berlin Heidelberg, February 2010.
[7] B. Coecke, A. Kissinger, A. Merry, and S. Roy. The GHZ/W-calculus contains rational arithmetic. In Proceedings of the CSR 2010 Workshop on High Productivity Computations, volume 52 of Electronic Proceedings in Theoretical Computer Science, pages 34-48, March 2011.
[8] B. Coecke and D. Pavlovic. Quantum measurements without sums. In G. Chen, L. Kauffman, and S. J. Lomonaco, editors, The Mathematics of Quantum Computation and Technology, pages 559-596. Taylor and Francis, August 2007.
[9] L. Dixon and R. Duncan. Graphical reasoning in compact closed categories for quantum computation. Annals of Mathematics and Artificial Intelligence, 56(1):23-42, July 2009.
[10] L. Dixon, R. Duncan, and A. Kissinger. Open graphs and computational reasoning. In Proceedings of the 6th Workshop on Developments in Computational Models (DCM), volume 26 of Electronic Proceedings in Theoretical Computer Science, pages 169-180, June 2010.
[11] L. Dixon and A. Kissinger. Open-graphs and monoidal theories. Mathematical Structures in Computer Science, 23(2):308-359, February 2013.
[12] R. Duncan and M. Lucas. Verifying the Steane code with Quantomatic. In Proceedings of the 10th International Workshop on Quantum Physics and Logic (QPL), 2013.
[13] R. Duncan and S. Perdrix. Rewriting measurement-based quantum computations with generalised flow. In Proceedings of the 37th International Colloquium on Automata, Languages and Programming (ICALP), volume 6199 of Lecture Notes in Computer Science, pages 285-296. Springer Berlin Heidelberg, 2010.
[14] H. Ehrig and H.-J. Kreowski. Church-Rosser theorems leading to parallel and canonical derivations for graph-grammars. Technical report, TU Berlin, 1975.
[15] H. Ehrig, M. Pfender, and H. J. Schneider. Graph-grammars: an algebraic approach. In Conference Record of the 14th Annual Symposium on Switching and Automata Theory (SWAT), pages 167-180. IEEE, October 1973.
[16] J.-Y. Girard. Linear logic. Theoretical Computer Science, 50(1):1-101, 1987.
[17] D. Gries and F. B. Schneider. A Logical Approach to Discrete Mathematics. Springer Verlag, 1993.
[18] M. Hasegawa, M. Hofmann, and G. Plotkin. Finite dimensional vector spaces are complete for traced symmetric monoidal categories. In A. Avron, N. Dershowitz, and A. Rabinovich, editors, Pillars of Computer Science, Lecture Notes in Computer Science, pages 367-385. Springer Berlin Heidelberg, 2008.
[19] G. Huet. A complete proof of correctness of the Knuth-Bendix completion algorithm. Journal of Computer and System Sciences, 23(1):11-21, August 1981.
[20] A. Joyal and R. Street. The geometry of tensor calculus I. Advances in Mathematics, 88(1):55-112, July 1991.
[21] A. Joyal, R. Street, and D. Verity. Traced monoidal categories. Mathematical Proceedings of the Cambridge Philosophical Society, 119(03):447-468, April 1996.
[22] A. Kissinger. Graph rewrite systems for classical structures in †-symmetric monoidal categories. MSc Thesis, University of Oxford, 2008.
[23] A. Kissinger. Pictures of Processes. DPhil Thesis, University of Oxford, 2011.
[24] A. Kissinger. Synthesising graphical theories. In Proceedings of the 1st Workshop on Automated Theory eXploration (ATX), February 2012.
[25] A. Kissinger, A. Merry, L. Dixon, R. Duncan, M. Soloviev, and B. Frot. Quantomatic, https://sites.google.com/site/quantomatic/.
[26] A. Kissinger, A. Merry, and M. Soloviev. Pattern graph rewrite systems. In Proceedings of the 8th International Workshop on Developments in Computational Models (DCM), Electronic Proceedings in Theoretical Computer Science, 2012.
[27] S. Lack and P. Sobociński. Adhesive and quasiadhesive categories. Theoretical Informatics and Applications, 39(3):511-545, 2005.
[28] A. Lang and B. Coecke. Trichromatic open digraphs for understanding qubits. In Proceedings of the 8th International Workshop on Quantum Physics and Logic (QPL), volume 95 of Electronic Proceedings in Theoretical Computer Science, pages 193-209, October 2011.
[29] S. Mac Lane. Categories for the Working Mathematician. Springer Verlag, 2nd edition, 1998.
[30] U. Nickel, J. Niere, and A. Zündorf. The FuJaBa environment. In Proceedings of the 22nd International Conference on Software Engineering (ICSE), pages 742-745, 2000.
[31] T. Nipkow, L. C. Paulson, and M. Wenzel. Isabelle/HOL: A Proof Assistant for Higher-Order Logic. Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2002.
[32] R. Penrose. Applications of negative dimensional tensors. In D. J. A. Welsh, editor, Combinatorial Mathematics and its Applications, pages 221-244. Academic Press, New York, New York, USA, 1971.
[33] J. L. Pfaltz and A. Rosenfeld. Web grammars. In Proceedings of the 1st International Joint Conference on Artificial Intelligence, pages 609-619. Morgan Kaufmann Publishers Inc., 1969.
[34] A. Rensink. Nested quantification in graph transformation rules. In Proceedings of the 3rd International Conference on Graph Transformations (ICGT), volume 4178 of Lecture Notes in Computer Science, pages 1-13. Springer Berlin Heidelberg, September 2006.
[35] A. Rensink and J.-H. Kuperus. Repotting the geraniums: on nested graph transformation rules. Electronic Communications of the EASST, 18, 2009.
[36] H. J. Schneider. Chomsky-Systeme für partielle Ordnungen. Technical report, Universität Erlangen, 1970.
[37] P. Selinger. A survey of graphical languages for monoidal categories. In B. Coecke, editor, New Structures for Physics, volume 813, pages 289-355. Springer Berlin Heidelberg, August 2009.
[38] F. Stallmann. A model-driven approach to multi-agent system design. PhD Thesis, University of Paderborn, 2008.
[39] G. Taentzer. Parallel and distributed graph transformation: formal description and application to communication-based systems. PhD Thesis, TU Berlin, 1996.
[40] J. R. Ullmann. An algorithm for subgraph isomorphism. Journal of the ACM, 23(1):31-42, January 1976.
[41] P. M. van den Broek. Algebraic graph rewriting using a single pushout. In Proceedings of the International Joint Conference on Theory and Practice of Software Development (TAPSOFT), volume 493 of Lecture Notes in Computer Science, pages 90-102. Springer Berlin Heidelberg, 1991.